00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 925 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3586 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.150 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.151 The recommended git tool is: git 00:00:00.151 using credential 00000000-0000-0000-0000-000000000002 00:00:00.153 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.204 Fetching changes from the remote Git repository 00:00:00.208 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.253 Using shallow fetch with depth 1 00:00:00.253 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.253 > git --version # timeout=10 00:00:00.288 > git --version # 'git version 2.39.2' 00:00:00.288 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.310 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.310 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.219 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.230 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.243 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.243 > git config core.sparsecheckout # timeout=10 00:00:06.254 > git read-tree -mu HEAD # timeout=10 00:00:06.268 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.287 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.287 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.413 [Pipeline] Start of Pipeline 00:00:06.427 [Pipeline] library 00:00:06.428 Loading library shm_lib@master 00:00:06.428 Library shm_lib@master is cached. Copying from home. 00:00:06.441 [Pipeline] node 00:00:06.456 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.459 [Pipeline] { 00:00:06.469 [Pipeline] catchError 00:00:06.471 [Pipeline] { 00:00:06.483 [Pipeline] wrap 00:00:06.494 [Pipeline] { 00:00:06.533 [Pipeline] stage 00:00:06.536 [Pipeline] { (Prologue) 00:00:06.559 [Pipeline] echo 00:00:06.561 Node: VM-host-SM9 00:00:06.570 [Pipeline] cleanWs 00:00:06.580 [WS-CLEANUP] Deleting project workspace... 00:00:06.580 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.587 [WS-CLEANUP] done 00:00:06.788 [Pipeline] setCustomBuildProperty 00:00:06.855 [Pipeline] httpRequest 00:00:07.224 [Pipeline] echo 00:00:07.226 Sorcerer 10.211.164.101 is alive 00:00:07.234 [Pipeline] retry 00:00:07.235 [Pipeline] { 00:00:07.248 [Pipeline] httpRequest 00:00:07.252 HttpMethod: GET 00:00:07.253 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.254 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.255 Response Code: HTTP/1.1 200 OK 00:00:07.256 Success: Status code 200 is in the accepted range: 200,404 00:00:07.256 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.117 [Pipeline] } 00:00:08.133 [Pipeline] // retry 00:00:08.139 [Pipeline] sh 00:00:08.413 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:08.426 [Pipeline] httpRequest 00:00:09.380 [Pipeline] echo 00:00:09.381 Sorcerer 10.211.164.101 is alive 00:00:09.390 [Pipeline] retry 00:00:09.392 [Pipeline] { 00:00:09.406 [Pipeline] httpRequest 00:00:09.410 HttpMethod: GET 00:00:09.411 URL: http://10.211.164.101/packages/spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:09.411 Sending request to url: http://10.211.164.101/packages/spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:09.412 Response Code: HTTP/1.1 200 OK 00:00:09.413 Success: Status code 200 is in the accepted range: 200,404 00:00:09.414 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:32.557 [Pipeline] } 00:00:32.574 [Pipeline] // retry 00:00:32.581 [Pipeline] sh 00:00:32.862 + tar --no-same-owner -xf spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60.tar.gz 00:00:35.409 [Pipeline] sh 00:00:35.689 + git -C spdk log --oneline -n5 00:00:35.689 12fc2abf1 test: Remove autopackage.sh 00:00:35.689 83ba90867 fio/bdev: fix typo in README 00:00:35.689 45379ed84 module/compress: Cleanup vol data, when claim fails 00:00:35.689 0afe95a3a bdev/nvme: use bdev_nvme linker script 00:00:35.689 1cbacb58f test/nvmf: Clarify comment about lack of support for iWARP in tests 00:00:35.706 [Pipeline] withCredentials 00:00:35.715 > git --version # timeout=10 00:00:35.726 > git --version # 'git version 2.39.2' 00:00:35.742 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:35.744 [Pipeline] { 00:00:35.752 [Pipeline] retry 00:00:35.754 [Pipeline] { 00:00:35.767 [Pipeline] sh 00:00:36.045 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:36.316 [Pipeline] } 00:00:36.335 [Pipeline] // retry 00:00:36.342 [Pipeline] } 00:00:36.361 [Pipeline] // withCredentials 00:00:36.371 [Pipeline] httpRequest 00:00:37.380 [Pipeline] echo 00:00:37.382 Sorcerer 10.211.164.101 is alive 00:00:37.393 [Pipeline] retry 00:00:37.395 [Pipeline] { 00:00:37.410 [Pipeline] httpRequest 00:00:37.416 HttpMethod: GET 00:00:37.416 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:37.417 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:37.430 Response Code: HTTP/1.1 200 OK 00:00:37.431 Success: Status code 200 is in the accepted range: 200,404 00:00:37.432 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:14.146 [Pipeline] } 00:01:14.165 [Pipeline] // retry 
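The jbp and spdk downloads above, and the dpdk download that follows, all use the same fetch-and-extract pattern: GET the tarball from the internal package cache, check the response status, then unpack it with tar --no-same-owner. A minimal shell sketch of that pattern, using curl in place of the Jenkins httpRequest step (the cache host 10.211.164.101 and the /packages/ path are copied from the log above; the helper name fetch_package is hypothetical):

    fetch_package() {
        # Hypothetical helper mirroring the retry + httpRequest + tar steps above.
        local name="$1" dest="$2"
        local url="http://10.211.164.101/packages/${name}.tar.gz"
        # -f makes curl fail on non-2xx responses, roughly the same guard as the
        # "Status code 200 is in the accepted range" check in the pipeline.
        curl -fSL -o "${dest}/${name}.tar.gz" "${url}"
        # --no-same-owner avoids chown errors when the archive was packed as root.
        tar --no-same-owner -xf "${dest}/${name}.tar.gz" -C "${dest}"
    }
    # e.g. fetch_package "spdk_12fc2abf1e54ef44d6ae9091ab879722d4e15e60" "$WORKSPACE"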
00:01:14.172 [Pipeline] sh 00:01:14.453 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:15.843 [Pipeline] sh 00:01:16.125 + git -C dpdk log --oneline -n5 00:01:16.125 eeb0605f11 version: 23.11.0 00:01:16.125 238778122a doc: update release notes for 23.11 00:01:16.125 46aa6b3cfc doc: fix description of RSS features 00:01:16.125 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:16.125 7e421ae345 devtools: support skipping forbid rule check 00:01:16.143 [Pipeline] writeFile 00:01:16.158 [Pipeline] sh 00:01:16.440 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:16.452 [Pipeline] sh 00:01:16.734 + cat autorun-spdk.conf 00:01:16.734 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.734 SPDK_TEST_NVMF=1 00:01:16.734 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.734 SPDK_TEST_URING=1 00:01:16.734 SPDK_TEST_VFIOUSER=1 00:01:16.734 SPDK_TEST_USDT=1 00:01:16.734 SPDK_RUN_UBSAN=1 00:01:16.734 NET_TYPE=virt 00:01:16.734 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:16.734 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:16.734 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:16.742 RUN_NIGHTLY=1 00:01:16.744 [Pipeline] } 00:01:16.758 [Pipeline] // stage 00:01:16.774 [Pipeline] stage 00:01:16.776 [Pipeline] { (Run VM) 00:01:16.790 [Pipeline] sh 00:01:17.072 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:17.073 + echo 'Start stage prepare_nvme.sh' 00:01:17.073 Start stage prepare_nvme.sh 00:01:17.073 + [[ -n 1 ]] 00:01:17.073 + disk_prefix=ex1 00:01:17.073 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:17.073 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:17.073 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:17.073 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.073 ++ SPDK_TEST_NVMF=1 00:01:17.073 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:17.073 ++ SPDK_TEST_URING=1 00:01:17.073 ++ SPDK_TEST_VFIOUSER=1 00:01:17.073 ++ SPDK_TEST_USDT=1 00:01:17.073 ++ SPDK_RUN_UBSAN=1 00:01:17.073 ++ NET_TYPE=virt 00:01:17.073 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:17.073 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:17.073 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:17.073 ++ RUN_NIGHTLY=1 00:01:17.073 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:17.073 + nvme_files=() 00:01:17.073 + declare -A nvme_files 00:01:17.073 + backend_dir=/var/lib/libvirt/images/backends 00:01:17.073 + nvme_files['nvme.img']=5G 00:01:17.073 + nvme_files['nvme-cmb.img']=5G 00:01:17.073 + nvme_files['nvme-multi0.img']=4G 00:01:17.073 + nvme_files['nvme-multi1.img']=4G 00:01:17.073 + nvme_files['nvme-multi2.img']=4G 00:01:17.073 + nvme_files['nvme-openstack.img']=8G 00:01:17.073 + nvme_files['nvme-zns.img']=5G 00:01:17.073 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:17.073 + (( SPDK_TEST_FTL == 1 )) 00:01:17.073 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:17.073 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:17.073 + for nvme in "${!nvme_files[@]}" 00:01:17.073 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:17.073 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.073 + for nvme in "${!nvme_files[@]}" 00:01:17.073 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:17.073 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.073 + for nvme in "${!nvme_files[@]}" 00:01:17.073 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:17.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:17.332 + for nvme in "${!nvme_files[@]}" 00:01:17.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:17.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.332 + for nvme in "${!nvme_files[@]}" 00:01:17.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:17.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.332 + for nvme in "${!nvme_files[@]}" 00:01:17.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:17.332 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:17.332 + for nvme in "${!nvme_files[@]}" 00:01:17.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:17.591 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:17.591 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:17.591 + echo 'End stage prepare_nvme.sh' 00:01:17.591 End stage prepare_nvme.sh 00:01:17.611 [Pipeline] sh 00:01:17.911 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:17.911 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:18.171 00:01:18.171 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:18.171 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:18.171 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:18.171 HELP=0 00:01:18.171 DRY_RUN=0 00:01:18.171 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:18.171 NVME_DISKS_TYPE=nvme,nvme, 00:01:18.171 NVME_AUTO_CREATE=0 00:01:18.171 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:18.171 NVME_CMB=,, 00:01:18.171 NVME_PMR=,, 00:01:18.171 NVME_ZNS=,, 00:01:18.171 NVME_MS=,, 00:01:18.171 NVME_FDP=,, 
00:01:18.171 SPDK_VAGRANT_DISTRO=fedora39 00:01:18.171 SPDK_VAGRANT_VMCPU=10 00:01:18.171 SPDK_VAGRANT_VMRAM=12288 00:01:18.171 SPDK_VAGRANT_PROVIDER=libvirt 00:01:18.171 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:18.171 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:18.171 SPDK_OPENSTACK_NETWORK=0 00:01:18.171 VAGRANT_PACKAGE_BOX=0 00:01:18.171 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:18.171 FORCE_DISTRO=true 00:01:18.171 VAGRANT_BOX_VERSION= 00:01:18.171 EXTRA_VAGRANTFILES= 00:01:18.171 NIC_MODEL=e1000 00:01:18.171 00:01:18.171 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:18.171 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:21.462 Bringing machine 'default' up with 'libvirt' provider... 00:01:21.720 ==> default: Creating image (snapshot of base box volume). 00:01:21.720 ==> default: Creating domain with the following settings... 00:01:21.720 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730199026_0b88e7323e990e21d4b1 00:01:21.720 ==> default: -- Domain type: kvm 00:01:21.720 ==> default: -- Cpus: 10 00:01:21.720 ==> default: -- Feature: acpi 00:01:21.720 ==> default: -- Feature: apic 00:01:21.720 ==> default: -- Feature: pae 00:01:21.720 ==> default: -- Memory: 12288M 00:01:21.720 ==> default: -- Memory Backing: hugepages: 00:01:21.720 ==> default: -- Management MAC: 00:01:21.720 ==> default: -- Loader: 00:01:21.720 ==> default: -- Nvram: 00:01:21.720 ==> default: -- Base box: spdk/fedora39 00:01:21.720 ==> default: -- Storage pool: default 00:01:21.720 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730199026_0b88e7323e990e21d4b1.img (20G) 00:01:21.720 ==> default: -- Volume Cache: default 00:01:21.720 ==> default: -- Kernel: 00:01:21.720 ==> default: -- Initrd: 00:01:21.720 ==> default: -- Graphics Type: vnc 00:01:21.720 ==> default: -- Graphics Port: -1 00:01:21.720 ==> default: -- Graphics IP: 127.0.0.1 00:01:21.720 ==> default: -- Graphics Password: Not defined 00:01:21.720 ==> default: -- Video Type: cirrus 00:01:21.720 ==> default: -- Video VRAM: 9216 00:01:21.720 ==> default: -- Sound Type: 00:01:21.720 ==> default: -- Keymap: en-us 00:01:21.720 ==> default: -- TPM Path: 00:01:21.720 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:21.720 ==> default: -- Command line args: 00:01:21.720 ==> default: -> value=-device, 00:01:21.720 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:21.720 ==> default: -> value=-drive, 00:01:21.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:21.720 ==> default: -> value=-device, 00:01:21.720 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.720 ==> default: -> value=-device, 00:01:21.720 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:21.720 ==> default: -> value=-drive, 00:01:21.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:21.720 ==> default: -> value=-device, 00:01:21.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.720 ==> default: -> value=-drive, 00:01:21.720 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:21.720 ==> default: -> value=-device, 00:01:21.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.720 ==> default: -> value=-drive, 00:01:21.720 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:21.720 ==> default: -> value=-device, 00:01:21.720 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:21.980 ==> default: Creating shared folders metadata... 00:01:21.980 ==> default: Starting domain. 00:01:22.919 ==> default: Waiting for domain to get an IP address... 00:01:41.013 ==> default: Waiting for SSH to become available... 00:01:41.013 ==> default: Configuring and enabling network interfaces... 00:01:43.589 default: SSH address: 192.168.121.156:22 00:01:43.589 default: SSH username: vagrant 00:01:43.589 default: SSH auth method: private key 00:01:46.121 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:52.687 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:59.252 ==> default: Mounting SSHFS shared folder... 00:02:00.630 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:00.630 ==> default: Checking Mount.. 00:02:01.565 ==> default: Folder Successfully Mounted! 00:02:01.565 ==> default: Running provisioner: file... 00:02:02.501 default: ~/.gitconfig => .gitconfig 00:02:03.069 00:02:03.069 SUCCESS! 00:02:03.069 00:02:03.069 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:03.069 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:03.069 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:03.069 00:02:03.079 [Pipeline] } 00:02:03.094 [Pipeline] // stage 00:02:03.103 [Pipeline] dir 00:02:03.104 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:03.106 [Pipeline] { 00:02:03.118 [Pipeline] catchError 00:02:03.120 [Pipeline] { 00:02:03.133 [Pipeline] sh 00:02:03.413 + vagrant ssh-config --host vagrant 00:02:03.413 + sed -ne /^Host/,$p 00:02:03.413 + tee ssh_conf 00:02:06.699 Host vagrant 00:02:06.699 HostName 192.168.121.156 00:02:06.699 User vagrant 00:02:06.699 Port 22 00:02:06.699 UserKnownHostsFile /dev/null 00:02:06.699 StrictHostKeyChecking no 00:02:06.699 PasswordAuthentication no 00:02:06.699 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:06.699 IdentitiesOnly yes 00:02:06.699 LogLevel FATAL 00:02:06.699 ForwardAgent yes 00:02:06.699 ForwardX11 yes 00:02:06.699 00:02:06.713 [Pipeline] withEnv 00:02:06.715 [Pipeline] { 00:02:06.731 [Pipeline] sh 00:02:07.046 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:07.046 source /etc/os-release 00:02:07.046 [[ -e /image.version ]] && img=$(< /image.version) 00:02:07.046 # Minimal, systemd-like check. 
00:02:07.046 if [[ -e /.dockerenv ]]; then 00:02:07.046 # Clear garbage from the node's name: 00:02:07.046 # agt-er_autotest_547-896 -> autotest_547-896 00:02:07.046 # $HOSTNAME is the actual container id 00:02:07.046 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:07.046 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:07.046 # We can assume this is a mount from a host where container is running, 00:02:07.046 # so fetch its hostname to easily identify the target swarm worker. 00:02:07.046 container="$(< /etc/hostname) ($agent)" 00:02:07.046 else 00:02:07.046 # Fallback 00:02:07.046 container=$agent 00:02:07.046 fi 00:02:07.046 fi 00:02:07.046 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:07.046 00:02:07.318 [Pipeline] } 00:02:07.340 [Pipeline] // withEnv 00:02:07.350 [Pipeline] setCustomBuildProperty 00:02:07.366 [Pipeline] stage 00:02:07.369 [Pipeline] { (Tests) 00:02:07.385 [Pipeline] sh 00:02:07.662 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:07.935 [Pipeline] sh 00:02:08.216 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:08.492 [Pipeline] timeout 00:02:08.493 Timeout set to expire in 1 hr 0 min 00:02:08.496 [Pipeline] { 00:02:08.512 [Pipeline] sh 00:02:08.793 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:09.361 HEAD is now at 12fc2abf1 test: Remove autopackage.sh 00:02:09.373 [Pipeline] sh 00:02:09.653 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:09.962 [Pipeline] sh 00:02:10.242 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:10.261 [Pipeline] sh 00:02:10.542 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:10.801 ++ readlink -f spdk_repo 00:02:10.801 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:10.801 + [[ -n /home/vagrant/spdk_repo ]] 00:02:10.801 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:10.801 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:10.801 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:10.801 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:10.801 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:10.801 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:10.801 + cd /home/vagrant/spdk_repo 00:02:10.801 + source /etc/os-release 00:02:10.801 ++ NAME='Fedora Linux' 00:02:10.801 ++ VERSION='39 (Cloud Edition)' 00:02:10.801 ++ ID=fedora 00:02:10.801 ++ VERSION_ID=39 00:02:10.801 ++ VERSION_CODENAME= 00:02:10.801 ++ PLATFORM_ID=platform:f39 00:02:10.801 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:10.802 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:10.802 ++ LOGO=fedora-logo-icon 00:02:10.802 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:10.802 ++ HOME_URL=https://fedoraproject.org/ 00:02:10.802 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:10.802 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:10.802 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:10.802 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:10.802 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:10.802 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:10.802 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:10.802 ++ SUPPORT_END=2024-11-12 00:02:10.802 ++ VARIANT='Cloud Edition' 00:02:10.802 ++ VARIANT_ID=cloud 00:02:10.802 + uname -a 00:02:10.802 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:10.802 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:11.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:11.370 Hugepages 00:02:11.370 node hugesize free / total 00:02:11.370 node0 1048576kB 0 / 0 00:02:11.370 node0 2048kB 0 / 0 00:02:11.370 00:02:11.370 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:11.370 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:11.370 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:11.370 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:11.370 + rm -f /tmp/spdk-ld-path 00:02:11.370 + source autorun-spdk.conf 00:02:11.370 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.370 ++ SPDK_TEST_NVMF=1 00:02:11.370 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.370 ++ SPDK_TEST_URING=1 00:02:11.370 ++ SPDK_TEST_VFIOUSER=1 00:02:11.370 ++ SPDK_TEST_USDT=1 00:02:11.370 ++ SPDK_RUN_UBSAN=1 00:02:11.370 ++ NET_TYPE=virt 00:02:11.370 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.370 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:11.370 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.370 ++ RUN_NIGHTLY=1 00:02:11.370 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:11.370 + [[ -n '' ]] 00:02:11.370 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:11.370 + for M in /var/spdk/build-*-manifest.txt 00:02:11.370 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:11.370 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.370 + for M in /var/spdk/build-*-manifest.txt 00:02:11.370 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:11.370 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.370 + for M in /var/spdk/build-*-manifest.txt 00:02:11.370 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:11.370 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:11.370 ++ uname 00:02:11.370 + [[ Linux == \L\i\n\u\x ]] 00:02:11.370 + sudo dmesg -T 00:02:11.370 + sudo dmesg --clear 00:02:11.370 + dmesg_pid=6000 
00:02:11.370 + [[ Fedora Linux == FreeBSD ]] 00:02:11.370 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.370 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:11.370 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:11.370 + sudo dmesg -Tw 00:02:11.370 + [[ -x /usr/src/fio-static/fio ]] 00:02:11.370 + export FIO_BIN=/usr/src/fio-static/fio 00:02:11.370 + FIO_BIN=/usr/src/fio-static/fio 00:02:11.370 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:11.370 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:11.370 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:11.370 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.370 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:11.370 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:11.371 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.371 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:11.371 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:11.630 10:51:16 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:11.630 10:51:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:02:11.630 10:51:16 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:11.631 10:51:16 -- spdk_repo/autorun-spdk.conf@8 -- $ NET_TYPE=virt 00:02:11.631 10:51:16 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:11.631 10:51:16 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:11.631 10:51:16 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:11.631 10:51:16 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:11.631 10:51:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:11.631 10:51:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:11.631 10:51:16 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:11.631 10:51:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:11.631 10:51:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:11.631 10:51:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:11.631 10:51:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.631 10:51:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.631 10:51:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.631 10:51:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.631 10:51:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.631 10:51:16 -- paths/export.sh@5 -- $ export PATH 00:02:11.631 10:51:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.631 10:51:16 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:11.631 10:51:16 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:11.631 10:51:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730199076.XXXXXX 00:02:11.631 10:51:16 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730199076.Ldn4g3 00:02:11.631 10:51:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:11.631 10:51:16 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:02:11.631 10:51:16 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:11.631 10:51:16 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:11.631 10:51:16 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:11.631 10:51:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:11.631 10:51:16 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:11.631 10:51:16 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:11.631 10:51:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.631 10:51:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:11.631 10:51:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:11.631 10:51:16 -- pm/common@17 -- $ local monitor 00:02:11.631 10:51:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.631 10:51:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.631 10:51:16 -- pm/common@25 -- $ sleep 1 
00:02:11.631 10:51:16 -- pm/common@21 -- $ date +%s 00:02:11.631 10:51:16 -- pm/common@21 -- $ date +%s 00:02:11.631 10:51:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730199076 00:02:11.631 10:51:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730199076 00:02:11.631 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730199076_collect-cpu-load.pm.log 00:02:11.631 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730199076_collect-vmstat.pm.log 00:02:12.567 10:51:17 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:12.567 10:51:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:12.568 10:51:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:12.568 10:51:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:12.568 10:51:17 -- spdk/autobuild.sh@16 -- $ date -u 00:02:12.568 Tue Oct 29 10:51:17 AM UTC 2024 00:02:12.568 10:51:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:12.568 v25.01-pre-123-g12fc2abf1 00:02:12.568 10:51:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:12.568 10:51:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:12.568 10:51:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:12.568 10:51:18 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:12.568 10:51:18 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:12.568 10:51:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.568 ************************************ 00:02:12.568 START TEST ubsan 00:02:12.568 ************************************ 00:02:12.568 using ubsan 00:02:12.568 10:51:18 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:12.568 00:02:12.568 real 0m0.000s 00:02:12.568 user 0m0.000s 00:02:12.568 sys 0m0.000s 00:02:12.568 10:51:18 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:12.568 10:51:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:12.568 ************************************ 00:02:12.568 END TEST ubsan 00:02:12.568 ************************************ 00:02:12.568 10:51:18 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:12.568 10:51:18 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:12.568 10:51:18 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:12.568 10:51:18 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']' 00:02:12.568 10:51:18 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:12.568 10:51:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.827 ************************************ 00:02:12.827 START TEST build_native_dpdk 00:02:12.827 ************************************ 00:02:12.827 10:51:18 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:12.827 10:51:18 
build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:12.827 eeb0605f11 version: 23.11.0 00:02:12.827 238778122a doc: update release notes for 23.11 00:02:12.827 46aa6b3cfc doc: fix description of RSS features 00:02:12.827 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:12.827 7e421ae345 devtools: support skipping forbid rule check 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:12.827 10:51:18 
build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:12.827 patching file config/rte_config.h 00:02:12.827 Hunk #1 succeeded at 60 (offset 1 line). 
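The lt/ge checks traced above reduce to a field-by-field numeric comparison of dotted version strings: scripts/common.sh splits each version on '.', '-' and ':' into an array and walks the fields until one side wins. A condensed sketch of that idea, not the actual cmp_versions helper (version_lt is a hypothetical name):

    version_lt() {
        # True (return 0) when $1 is strictly older than $2, e.g. 23.11.0 < 24.07.0.
        local IFS='.-:'
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields count as 0 so 23.11 and 23.11.0 compare equal.
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # version_lt 23.11.0 21.11.0 -> returns 1 (false), matching the first trace above
    # version_lt 23.11.0 24.07.0 -> returns 0 (true), matching the comparison that follows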
00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:12.827 patching file lib/pcapng/rte_pcapng.c 00:02:12.827 10:51:18 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:12.827 10:51:18 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:12.828 10:51:18 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:12.828 10:51:18 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:12.828 10:51:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:12.828 10:51:18 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:12.828 10:51:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:12.828 10:51:18 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:18.098 The Meson build system 00:02:18.099 Version: 1.5.0 00:02:18.099 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:18.099 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:18.099 Build type: native build 00:02:18.099 Program cat found: YES (/usr/bin/cat) 00:02:18.099 Project name: DPDK 00:02:18.099 Project version: 23.11.0 00:02:18.099 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.099 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:18.099 Host machine cpu family: x86_64 00:02:18.099 Host machine cpu: x86_64 00:02:18.099 Message: ## Building in Developer Mode ## 00:02:18.099 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:18.099 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:18.099 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.099 Program python3 found: YES (/usr/bin/python3) 00:02:18.099 Program cat found: YES (/usr/bin/cat) 00:02:18.099 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:18.099 Compiler for C supports arguments -march=native: YES 00:02:18.099 Checking for size of "void *" : 8 00:02:18.099 Checking for size of "void *" : 8 (cached) 00:02:18.099 Library m found: YES 00:02:18.099 Library numa found: YES 00:02:18.099 Has header "numaif.h" : YES 00:02:18.099 Library fdt found: NO 00:02:18.099 Library execinfo found: NO 00:02:18.099 Has header "execinfo.h" : YES 00:02:18.099 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.099 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.099 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.099 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.099 Run-time dependency openssl found: YES 3.1.1 00:02:18.099 Run-time dependency libpcap found: YES 1.10.4 00:02:18.099 Has header "pcap.h" with dependency libpcap: YES 00:02:18.099 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.099 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.099 Compiler for C supports arguments -Wformat: YES 00:02:18.099 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.099 Compiler for C supports arguments -Wformat-security: NO 00:02:18.099 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.099 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.099 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.099 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.099 Compiler for C supports arguments -Wpointer-arith: YES 00:02:18.099 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.099 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.099 Compiler for C supports arguments -Wundef: YES 00:02:18.099 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.099 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.099 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:18.099 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.099 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.099 Program objdump found: YES (/usr/bin/objdump) 00:02:18.099 Compiler for C supports arguments -mavx512f: YES 00:02:18.099 Checking if "AVX512 checking" compiles: YES 00:02:18.099 Fetching value of define "__SSE4_2__" : 1 00:02:18.099 Fetching value of define "__AES__" : 1 00:02:18.099 Fetching value of define "__AVX__" : 1 00:02:18.099 Fetching value of define "__AVX2__" : 1 00:02:18.099 Fetching value of define "__AVX512BW__" : (undefined) 00:02:18.099 Fetching value of define "__AVX512CD__" : (undefined) 00:02:18.099 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:18.099 Fetching value of define "__AVX512F__" : (undefined) 00:02:18.099 Fetching value of define "__AVX512VL__" : (undefined) 00:02:18.099 Fetching value of define "__PCLMUL__" : 1 00:02:18.099 Fetching value of define "__RDRND__" : 1 00:02:18.099 Fetching value of define "__RDSEED__" : 1 00:02:18.099 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:18.099 Fetching value of define "__znver1__" : (undefined) 00:02:18.099 Fetching value of define "__znver2__" : (undefined) 00:02:18.099 Fetching value of define "__znver3__" : (undefined) 00:02:18.099 Fetching value of define "__znver4__" : (undefined) 00:02:18.099 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.099 Message: lib/log: Defining dependency "log" 00:02:18.099 Message: lib/kvargs: Defining dependency "kvargs" 00:02:18.099 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.099 Checking for function "getentropy" : NO 00:02:18.099 Message: lib/eal: Defining dependency "eal" 00:02:18.099 Message: lib/ring: Defining dependency "ring" 00:02:18.099 Message: lib/rcu: Defining dependency "rcu" 00:02:18.099 Message: lib/mempool: Defining dependency "mempool" 00:02:18.099 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.099 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.099 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.099 Compiler for C supports arguments -mpclmul: YES 00:02:18.099 Compiler for C supports arguments -maes: YES 00:02:18.099 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.099 Compiler for C supports arguments -mavx512bw: YES 00:02:18.099 Compiler for C supports arguments -mavx512dq: YES 00:02:18.099 Compiler for C supports arguments -mavx512vl: YES 00:02:18.099 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.099 Compiler for C supports arguments -mavx2: YES 00:02:18.099 Compiler for C supports arguments -mavx: YES 00:02:18.099 Message: lib/net: Defining dependency "net" 00:02:18.099 Message: lib/meter: Defining dependency "meter" 00:02:18.099 Message: lib/ethdev: Defining dependency "ethdev" 00:02:18.099 Message: lib/pci: Defining dependency "pci" 00:02:18.099 Message: lib/cmdline: Defining dependency "cmdline" 00:02:18.099 Message: lib/metrics: Defining dependency "metrics" 00:02:18.099 Message: lib/hash: Defining dependency "hash" 00:02:18.099 Message: lib/timer: Defining dependency "timer" 00:02:18.099 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.099 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:18.099 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:18.099 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:18.099 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:18.099 Message: lib/acl: Defining dependency "acl" 00:02:18.099 Message: lib/bbdev: Defining dependency "bbdev" 00:02:18.099 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:18.099 Run-time dependency libelf found: YES 0.191 00:02:18.099 Message: lib/bpf: Defining dependency "bpf" 00:02:18.099 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:18.099 Message: lib/compressdev: Defining dependency "compressdev" 00:02:18.099 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:18.099 Message: lib/distributor: Defining dependency "distributor" 00:02:18.099 Message: lib/dmadev: Defining dependency "dmadev" 00:02:18.099 Message: lib/efd: Defining dependency "efd" 00:02:18.099 Message: lib/eventdev: Defining dependency "eventdev" 00:02:18.099 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:18.099 Message: lib/gpudev: Defining dependency "gpudev" 00:02:18.099 Message: lib/gro: Defining dependency "gro" 00:02:18.099 Message: lib/gso: Defining dependency "gso" 00:02:18.099 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:18.099 Message: lib/jobstats: Defining dependency "jobstats" 00:02:18.099 Message: lib/latencystats: Defining dependency "latencystats" 00:02:18.099 Message: lib/lpm: Defining dependency "lpm" 00:02:18.099 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.099 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:18.099 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:18.099 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:18.099 Message: lib/member: Defining dependency "member" 00:02:18.099 Message: lib/pcapng: Defining dependency "pcapng" 00:02:18.099 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:18.099 Message: lib/power: Defining dependency "power" 00:02:18.099 Message: lib/rawdev: Defining dependency "rawdev" 00:02:18.099 Message: lib/regexdev: Defining dependency "regexdev" 00:02:18.099 Message: lib/mldev: Defining dependency "mldev" 00:02:18.099 Message: lib/rib: Defining dependency "rib" 00:02:18.099 Message: lib/reorder: Defining dependency "reorder" 00:02:18.099 Message: lib/sched: Defining dependency "sched" 00:02:18.099 Message: lib/security: Defining dependency "security" 00:02:18.099 Message: lib/stack: Defining dependency "stack" 00:02:18.099 Has header "linux/userfaultfd.h" : YES 00:02:18.099 Has header "linux/vduse.h" : YES 00:02:18.099 Message: lib/vhost: Defining dependency "vhost" 00:02:18.099 Message: lib/ipsec: Defining dependency "ipsec" 00:02:18.099 Message: lib/pdcp: Defining dependency "pdcp" 00:02:18.099 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.099 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:18.099 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:18.099 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:18.099 Message: lib/fib: Defining dependency "fib" 00:02:18.099 Message: lib/port: Defining dependency "port" 00:02:18.099 Message: lib/pdump: Defining dependency "pdump" 00:02:18.099 Message: lib/table: Defining dependency "table" 00:02:18.099 Message: lib/pipeline: Defining dependency "pipeline" 00:02:18.099 Message: lib/graph: Defining dependency "graph" 00:02:18.099 Message: lib/node: Defining dependency "node" 00:02:18.099 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:20.016 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:20.016 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:20.016 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:20.016 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:20.016 Compiler for C supports arguments -Wno-unused-value: YES 00:02:20.016 Compiler for C supports arguments -Wno-format: YES 00:02:20.016 Compiler for C supports arguments -Wno-format-security: YES 00:02:20.016 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:20.016 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:20.016 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:20.016 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:20.016 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:20.016 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:20.016 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:20.016 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:20.016 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:20.016 Has header "sys/epoll.h" : YES 00:02:20.016 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:20.016 Configuring doxy-api-html.conf using configuration 00:02:20.016 Configuring doxy-api-man.conf using configuration 00:02:20.016 Program mandb found: YES (/usr/bin/mandb) 00:02:20.016 Program sphinx-build found: NO 00:02:20.016 Configuring rte_build_config.h using configuration 00:02:20.016 Message: 00:02:20.016 ================= 00:02:20.016 Applications Enabled 00:02:20.016 ================= 
00:02:20.016 00:02:20.016 apps: 00:02:20.016 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:20.016 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:20.016 test-pmd, test-regex, test-sad, test-security-perf, 00:02:20.016 00:02:20.016 Message: 00:02:20.016 ================= 00:02:20.016 Libraries Enabled 00:02:20.016 ================= 00:02:20.016 00:02:20.016 libs: 00:02:20.016 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:20.016 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:20.016 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:20.016 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:20.016 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:20.016 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:20.016 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:20.016 00:02:20.016 00:02:20.016 Message: 00:02:20.016 =============== 00:02:20.016 Drivers Enabled 00:02:20.016 =============== 00:02:20.016 00:02:20.016 common: 00:02:20.016 00:02:20.016 bus: 00:02:20.016 pci, vdev, 00:02:20.016 mempool: 00:02:20.016 ring, 00:02:20.016 dma: 00:02:20.016 00:02:20.016 net: 00:02:20.016 i40e, 00:02:20.016 raw: 00:02:20.016 00:02:20.016 crypto: 00:02:20.016 00:02:20.016 compress: 00:02:20.016 00:02:20.016 regex: 00:02:20.016 00:02:20.016 ml: 00:02:20.016 00:02:20.016 vdpa: 00:02:20.016 00:02:20.016 event: 00:02:20.016 00:02:20.016 baseband: 00:02:20.016 00:02:20.016 gpu: 00:02:20.016 00:02:20.016 00:02:20.016 Message: 00:02:20.016 ================= 00:02:20.016 Content Skipped 00:02:20.016 ================= 00:02:20.016 00:02:20.016 apps: 00:02:20.016 00:02:20.016 libs: 00:02:20.016 00:02:20.016 drivers: 00:02:20.016 common/cpt: not in enabled drivers build config 00:02:20.016 common/dpaax: not in enabled drivers build config 00:02:20.016 common/iavf: not in enabled drivers build config 00:02:20.016 common/idpf: not in enabled drivers build config 00:02:20.016 common/mvep: not in enabled drivers build config 00:02:20.016 common/octeontx: not in enabled drivers build config 00:02:20.016 bus/auxiliary: not in enabled drivers build config 00:02:20.016 bus/cdx: not in enabled drivers build config 00:02:20.016 bus/dpaa: not in enabled drivers build config 00:02:20.016 bus/fslmc: not in enabled drivers build config 00:02:20.016 bus/ifpga: not in enabled drivers build config 00:02:20.016 bus/platform: not in enabled drivers build config 00:02:20.016 bus/vmbus: not in enabled drivers build config 00:02:20.016 common/cnxk: not in enabled drivers build config 00:02:20.016 common/mlx5: not in enabled drivers build config 00:02:20.016 common/nfp: not in enabled drivers build config 00:02:20.016 common/qat: not in enabled drivers build config 00:02:20.016 common/sfc_efx: not in enabled drivers build config 00:02:20.016 mempool/bucket: not in enabled drivers build config 00:02:20.016 mempool/cnxk: not in enabled drivers build config 00:02:20.016 mempool/dpaa: not in enabled drivers build config 00:02:20.016 mempool/dpaa2: not in enabled drivers build config 00:02:20.016 mempool/octeontx: not in enabled drivers build config 00:02:20.016 mempool/stack: not in enabled drivers build config 00:02:20.016 dma/cnxk: not in enabled drivers build config 00:02:20.016 dma/dpaa: not in enabled drivers build config 00:02:20.016 dma/dpaa2: not in enabled drivers build config 00:02:20.016 
dma/hisilicon: not in enabled drivers build config 00:02:20.016 dma/idxd: not in enabled drivers build config 00:02:20.016 dma/ioat: not in enabled drivers build config 00:02:20.016 dma/skeleton: not in enabled drivers build config 00:02:20.016 net/af_packet: not in enabled drivers build config 00:02:20.016 net/af_xdp: not in enabled drivers build config 00:02:20.016 net/ark: not in enabled drivers build config 00:02:20.016 net/atlantic: not in enabled drivers build config 00:02:20.016 net/avp: not in enabled drivers build config 00:02:20.016 net/axgbe: not in enabled drivers build config 00:02:20.016 net/bnx2x: not in enabled drivers build config 00:02:20.016 net/bnxt: not in enabled drivers build config 00:02:20.016 net/bonding: not in enabled drivers build config 00:02:20.016 net/cnxk: not in enabled drivers build config 00:02:20.016 net/cpfl: not in enabled drivers build config 00:02:20.016 net/cxgbe: not in enabled drivers build config 00:02:20.016 net/dpaa: not in enabled drivers build config 00:02:20.016 net/dpaa2: not in enabled drivers build config 00:02:20.016 net/e1000: not in enabled drivers build config 00:02:20.016 net/ena: not in enabled drivers build config 00:02:20.016 net/enetc: not in enabled drivers build config 00:02:20.016 net/enetfec: not in enabled drivers build config 00:02:20.016 net/enic: not in enabled drivers build config 00:02:20.016 net/failsafe: not in enabled drivers build config 00:02:20.016 net/fm10k: not in enabled drivers build config 00:02:20.016 net/gve: not in enabled drivers build config 00:02:20.016 net/hinic: not in enabled drivers build config 00:02:20.016 net/hns3: not in enabled drivers build config 00:02:20.016 net/iavf: not in enabled drivers build config 00:02:20.016 net/ice: not in enabled drivers build config 00:02:20.016 net/idpf: not in enabled drivers build config 00:02:20.016 net/igc: not in enabled drivers build config 00:02:20.016 net/ionic: not in enabled drivers build config 00:02:20.016 net/ipn3ke: not in enabled drivers build config 00:02:20.016 net/ixgbe: not in enabled drivers build config 00:02:20.016 net/mana: not in enabled drivers build config 00:02:20.016 net/memif: not in enabled drivers build config 00:02:20.016 net/mlx4: not in enabled drivers build config 00:02:20.016 net/mlx5: not in enabled drivers build config 00:02:20.016 net/mvneta: not in enabled drivers build config 00:02:20.016 net/mvpp2: not in enabled drivers build config 00:02:20.016 net/netvsc: not in enabled drivers build config 00:02:20.016 net/nfb: not in enabled drivers build config 00:02:20.016 net/nfp: not in enabled drivers build config 00:02:20.016 net/ngbe: not in enabled drivers build config 00:02:20.016 net/null: not in enabled drivers build config 00:02:20.016 net/octeontx: not in enabled drivers build config 00:02:20.016 net/octeon_ep: not in enabled drivers build config 00:02:20.016 net/pcap: not in enabled drivers build config 00:02:20.016 net/pfe: not in enabled drivers build config 00:02:20.016 net/qede: not in enabled drivers build config 00:02:20.016 net/ring: not in enabled drivers build config 00:02:20.016 net/sfc: not in enabled drivers build config 00:02:20.016 net/softnic: not in enabled drivers build config 00:02:20.016 net/tap: not in enabled drivers build config 00:02:20.016 net/thunderx: not in enabled drivers build config 00:02:20.016 net/txgbe: not in enabled drivers build config 00:02:20.016 net/vdev_netvsc: not in enabled drivers build config 00:02:20.016 net/vhost: not in enabled drivers build config 00:02:20.016 net/virtio: 
not in enabled drivers build config 00:02:20.016 net/vmxnet3: not in enabled drivers build config 00:02:20.016 raw/cnxk_bphy: not in enabled drivers build config 00:02:20.016 raw/cnxk_gpio: not in enabled drivers build config 00:02:20.016 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:20.016 raw/ifpga: not in enabled drivers build config 00:02:20.016 raw/ntb: not in enabled drivers build config 00:02:20.016 raw/skeleton: not in enabled drivers build config 00:02:20.016 crypto/armv8: not in enabled drivers build config 00:02:20.016 crypto/bcmfs: not in enabled drivers build config 00:02:20.016 crypto/caam_jr: not in enabled drivers build config 00:02:20.016 crypto/ccp: not in enabled drivers build config 00:02:20.016 crypto/cnxk: not in enabled drivers build config 00:02:20.016 crypto/dpaa_sec: not in enabled drivers build config 00:02:20.016 crypto/dpaa2_sec: not in enabled drivers build config 00:02:20.016 crypto/ipsec_mb: not in enabled drivers build config 00:02:20.016 crypto/mlx5: not in enabled drivers build config 00:02:20.016 crypto/mvsam: not in enabled drivers build config 00:02:20.016 crypto/nitrox: not in enabled drivers build config 00:02:20.016 crypto/null: not in enabled drivers build config 00:02:20.016 crypto/octeontx: not in enabled drivers build config 00:02:20.016 crypto/openssl: not in enabled drivers build config 00:02:20.016 crypto/scheduler: not in enabled drivers build config 00:02:20.016 crypto/uadk: not in enabled drivers build config 00:02:20.016 crypto/virtio: not in enabled drivers build config 00:02:20.016 compress/isal: not in enabled drivers build config 00:02:20.016 compress/mlx5: not in enabled drivers build config 00:02:20.016 compress/octeontx: not in enabled drivers build config 00:02:20.017 compress/zlib: not in enabled drivers build config 00:02:20.017 regex/mlx5: not in enabled drivers build config 00:02:20.017 regex/cn9k: not in enabled drivers build config 00:02:20.017 ml/cnxk: not in enabled drivers build config 00:02:20.017 vdpa/ifc: not in enabled drivers build config 00:02:20.017 vdpa/mlx5: not in enabled drivers build config 00:02:20.017 vdpa/nfp: not in enabled drivers build config 00:02:20.017 vdpa/sfc: not in enabled drivers build config 00:02:20.017 event/cnxk: not in enabled drivers build config 00:02:20.017 event/dlb2: not in enabled drivers build config 00:02:20.017 event/dpaa: not in enabled drivers build config 00:02:20.017 event/dpaa2: not in enabled drivers build config 00:02:20.017 event/dsw: not in enabled drivers build config 00:02:20.017 event/opdl: not in enabled drivers build config 00:02:20.017 event/skeleton: not in enabled drivers build config 00:02:20.017 event/sw: not in enabled drivers build config 00:02:20.017 event/octeontx: not in enabled drivers build config 00:02:20.017 baseband/acc: not in enabled drivers build config 00:02:20.017 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:20.017 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:20.017 baseband/la12xx: not in enabled drivers build config 00:02:20.017 baseband/null: not in enabled drivers build config 00:02:20.017 baseband/turbo_sw: not in enabled drivers build config 00:02:20.017 gpu/cuda: not in enabled drivers build config 00:02:20.017 00:02:20.017 00:02:20.017 Build targets in project: 220 00:02:20.017 00:02:20.017 DPDK 23.11.0 00:02:20.017 00:02:20.017 User defined options 00:02:20.017 libdir : lib 00:02:20.017 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:20.017 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:20.017 c_link_args : 00:02:20.017 enable_docs : false 00:02:20.017 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:20.017 enable_kmods : false 00:02:20.017 machine : native 00:02:20.017 tests : false 00:02:20.017 00:02:20.017 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:20.017 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:20.017 10:51:25 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:20.017 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:20.304 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:20.304 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:20.304 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:20.304 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:20.304 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:20.304 [6/710] Linking static target lib/librte_kvargs.a 00:02:20.304 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:20.304 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:20.563 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:20.563 [10/710] Linking static target lib/librte_log.a 00:02:20.563 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.822 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:20.822 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:20.822 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:20.822 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.822 [16/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:20.822 [17/710] Linking target lib/librte_log.so.24.0 00:02:21.081 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:21.081 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:21.081 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:21.340 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.340 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:21.340 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:21.340 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:21.340 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:21.340 [26/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:21.599 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:21.599 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.599 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:21.599 [30/710] Linking static target lib/librte_telemetry.a 00:02:21.599 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.599 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:21.857 [33/710] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:21.857 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:21.857 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.857 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.115 [37/710] Linking target lib/librte_telemetry.so.24.0 00:02:22.115 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.116 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.116 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.116 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.116 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.116 [43/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:22.116 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.374 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.374 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.374 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:22.633 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:22.633 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:22.633 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:22.633 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:22.891 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:22.891 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:22.891 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.891 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.150 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.150 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.150 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.150 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.150 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.150 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.150 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.150 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.409 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.409 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.409 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.667 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.667 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.667 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.925 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.925 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.925 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
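(Annotation, not part of the captured log: the configuration summary printed above records the options this DPDK 23.11.0 build was configured with, but the meson command line itself is not echoed, since it is driven by SPDK's autobuild_common.sh wrapper. A minimal sketch of a roughly equivalent manual invocation, assuming the DPDK tree at /home/vagrant/spdk_repo/dpdk exactly as shown in the "User defined options" block, might look like:

    cd /home/vagrant/spdk_repo/dpdk
    # Configure into build-tmp with the options listed under "User defined options".
    # "meson setup" is the non-deprecated spelling referred to by the WARNING above.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Build with 10 parallel jobs, matching the "ninja -C .../build-tmp -j10" call recorded above.
    ninja -C build-tmp -j10

This reconstruction is illustrative only; the option names and values are taken verbatim from the summary above, while the command shape is an assumption.)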
00:02:23.925 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.925 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.925 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.925 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.183 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.183 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.183 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.440 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.440 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.440 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.440 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.698 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.698 [85/710] Linking static target lib/librte_ring.a 00:02:24.698 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.698 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.698 [88/710] Linking static target lib/librte_eal.a 00:02:24.955 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.955 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.955 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.955 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:25.213 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:25.213 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.213 [95/710] Linking static target lib/librte_mempool.a 00:02:25.213 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:25.213 [97/710] Linking static target lib/librte_rcu.a 00:02:25.471 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:25.471 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:25.471 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.730 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:25.730 [102/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.730 [103/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.730 [104/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.730 [105/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.730 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.988 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.988 [108/710] Linking static target lib/librte_mbuf.a 00:02:25.988 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.988 [110/710] Linking static target lib/librte_net.a 00:02:26.245 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:26.246 [112/710] Linking static target lib/librte_meter.a 00:02:26.246 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.503 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:26.503 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.503 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:26.503 [117/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.503 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:26.503 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:27.076 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.335 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.335 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.592 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:27.592 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:27.592 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.592 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.592 [127/710] Linking static target lib/librte_pci.a 00:02:27.592 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.851 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.851 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.851 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.851 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.851 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.851 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:28.109 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:28.109 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:28.109 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:28.109 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:28.109 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:28.109 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:28.367 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:28.367 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:28.367 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.367 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.625 [145/710] Linking static target lib/librte_cmdline.a 00:02:28.625 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:28.625 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:28.625 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:28.625 [149/710] Linking static target lib/librte_metrics.a 00:02:28.883 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:29.141 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.400 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.400 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:29.400 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:29.400 [155/710] Linking static target lib/librte_timer.a 00:02:29.658 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.917 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:29.917 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:30.174 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:30.174 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:30.738 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:30.738 [162/710] Linking static target lib/librte_ethdev.a 00:02:30.996 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:30.996 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:30.996 [165/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:30.996 [166/710] Linking static target lib/librte_bitratestats.a 00:02:30.996 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:30.996 [168/710] Linking static target lib/librte_bbdev.a 00:02:31.253 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.253 [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:31.253 [171/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.253 [172/710] Linking static target lib/librte_hash.a 00:02:31.253 [173/710] Linking target lib/librte_eal.so.24.0 00:02:31.511 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:31.511 [175/710] Linking target lib/librte_ring.so.24.0 00:02:31.511 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:31.511 [177/710] Linking target lib/librte_meter.so.24.0 00:02:31.511 [178/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:31.511 [179/710] Linking target lib/librte_pci.so.24.0 00:02:31.769 [180/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:31.769 [181/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.769 [182/710] Linking target lib/librte_rcu.so.24.0 00:02:31.769 [183/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:31.769 [184/710] Linking target lib/librte_mempool.so.24.0 00:02:31.769 [185/710] Linking target lib/librte_timer.so.24.0 00:02:31.769 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:31.769 [187/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:31.769 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:02:31.769 [189/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.769 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:31.769 [191/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:31.769 [192/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:31.769 [193/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:31.769 [194/710] Linking static target lib/acl/libavx512_tmp.a 00:02:31.769 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:32.026 [196/710] Linking target lib/librte_mbuf.so.24.0 00:02:32.026 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:32.026 [198/710] Linking target lib/librte_net.so.24.0 00:02:32.026 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:32.284 [200/710] Linking static target lib/librte_acl.a 00:02:32.284 [201/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:32.284 [202/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:32.284 [203/710] Linking target lib/librte_bbdev.so.24.0 00:02:32.284 [204/710] Linking target lib/librte_cmdline.so.24.0 00:02:32.284 [205/710] Linking static target lib/librte_cfgfile.a 00:02:32.284 [206/710] Linking target lib/librte_hash.so.24.0 00:02:32.284 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:32.284 [208/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:32.585 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.585 [210/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:32.585 [211/710] Linking target lib/librte_acl.so.24.0 00:02:32.585 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.585 [213/710] Linking target lib/librte_cfgfile.so.24.0 00:02:32.585 [214/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:32.853 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:32.853 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:33.111 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:33.111 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.111 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:33.111 [220/710] Linking static target lib/librte_bpf.a 00:02:33.370 [221/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.370 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.370 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.370 [224/710] Linking static target lib/librte_compressdev.a 00:02:33.629 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.629 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.629 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:33.888 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:33.888 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:33.888 [230/710] Linking static target lib/librte_distributor.a 00:02:33.888 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.888 [232/710] Linking target lib/librte_compressdev.so.24.0 00:02:33.888 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:34.147 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.147 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:34.147 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:34.147 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.147 [238/710] Linking static 
target lib/librte_dmadev.a 00:02:34.712 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.712 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:34.712 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:34.712 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:34.970 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:35.229 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:35.229 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:35.229 [246/710] Linking static target lib/librte_efd.a 00:02:35.229 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.229 [248/710] Linking static target lib/librte_cryptodev.a 00:02:35.488 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.488 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:35.488 [251/710] Linking target lib/librte_efd.so.24.0 00:02:35.746 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:35.746 [253/710] Linking static target lib/librte_dispatcher.a 00:02:35.746 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:36.005 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.005 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:36.005 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:36.005 [258/710] Linking static target lib/librte_gpudev.a 00:02:36.005 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:36.263 [260/710] Linking target lib/librte_metrics.so.24.0 00:02:36.263 [261/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.263 [262/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:36.263 [263/710] Linking target lib/librte_bpf.so.24.0 00:02:36.263 [264/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:36.263 [265/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:36.263 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:36.263 [267/710] Linking target lib/librte_bitratestats.so.24.0 00:02:36.263 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:36.522 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:36.522 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.780 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:36.780 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:36.780 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:36.780 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.039 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:37.039 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:37.039 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:37.039 [278/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 
00:02:37.039 [279/710] Linking static target lib/librte_eventdev.a 00:02:37.039 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:37.297 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:37.297 [282/710] Linking static target lib/librte_gro.a 00:02:37.297 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:37.297 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:37.297 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:37.297 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:37.555 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.555 [288/710] Linking target lib/librte_gro.so.24.0 00:02:37.555 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:37.555 [290/710] Linking static target lib/librte_gso.a 00:02:37.814 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.814 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:37.814 [293/710] Linking target lib/librte_gso.so.24.0 00:02:37.814 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:37.814 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:38.072 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:38.072 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:38.072 [298/710] Linking static target lib/librte_jobstats.a 00:02:38.072 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:38.330 [300/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:38.330 [301/710] Linking static target lib/librte_latencystats.a 00:02:38.330 [302/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:38.330 [303/710] Linking static target lib/librte_ip_frag.a 00:02:38.330 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.330 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:38.591 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.591 [307/710] Linking target lib/librte_latencystats.so.24.0 00:02:38.591 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.591 [309/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:38.591 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:38.591 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:38.591 [312/710] Linking target lib/librte_ip_frag.so.24.0 00:02:38.591 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:38.848 [314/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:38.848 [315/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:38.848 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.848 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.105 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.364 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:39.364 [320/710] 
Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:39.364 [321/710] Linking static target lib/librte_lpm.a 00:02:39.364 [322/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:39.364 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:39.364 [324/710] Linking target lib/librte_dispatcher.so.24.0 00:02:39.364 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.623 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:39.623 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:39.623 [328/710] Linking static target lib/librte_pcapng.a 00:02:39.623 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.623 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.623 [331/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:39.623 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.623 [333/710] Linking target lib/librte_lpm.so.24.0 00:02:39.881 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:39.881 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.881 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:39.881 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:39.881 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:40.139 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:40.139 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:40.397 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:40.397 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:40.397 [343/710] Linking static target lib/librte_power.a 00:02:40.397 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:40.397 [345/710] Linking static target lib/librte_regexdev.a 00:02:40.397 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:40.397 [347/710] Linking static target lib/librte_rawdev.a 00:02:40.655 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:40.655 [349/710] Linking static target lib/librte_member.a 00:02:40.655 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:40.655 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:40.655 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:40.913 [353/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:40.913 [354/710] Linking static target lib/librte_mldev.a 00:02:40.913 [355/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.913 [356/710] Linking target lib/librte_member.so.24.0 00:02:40.913 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.913 [358/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.913 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:40.913 [360/710] Linking target lib/librte_power.so.24.0 00:02:41.171 [361/710] Compiling C object 
lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:41.171 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:41.171 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.171 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:41.171 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:41.430 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:41.430 [367/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:41.430 [368/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:41.430 [369/710] Linking static target lib/librte_rib.a 00:02:41.430 [370/710] Linking static target lib/librte_reorder.a 00:02:41.688 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:41.688 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:41.688 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:41.688 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:41.688 [375/710] Linking static target lib/librte_stack.a 00:02:41.946 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.946 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:41.946 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:41.946 [379/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.946 [380/710] Linking static target lib/librte_security.a 00:02:41.946 [381/710] Linking target lib/librte_rib.so.24.0 00:02:41.946 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:41.946 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.946 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.946 [385/710] Linking target lib/librte_stack.so.24.0 00:02:42.204 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:42.205 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:42.205 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:42.463 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.463 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:42.463 [391/710] Linking target lib/librte_security.so.24.0 00:02:42.463 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:42.463 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:42.721 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:42.721 [395/710] Linking static target lib/librte_sched.a 00:02:42.979 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:43.237 [397/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:43.237 [398/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.237 [399/710] Linking target lib/librte_sched.so.24.0 00:02:43.237 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:43.237 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.495 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:43.495 [403/710] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:43.753 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.047 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:44.047 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:44.047 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:44.313 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:44.313 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:44.313 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:44.577 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:44.577 [412/710] Linking static target lib/librte_ipsec.a 00:02:44.577 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:44.835 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:44.835 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:44.835 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.835 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:44.835 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:44.835 [419/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:45.093 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:45.093 [421/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:45.093 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:45.093 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:46.028 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:46.028 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:46.028 [426/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:46.028 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:46.028 [428/710] Linking static target lib/librte_pdcp.a 00:02:46.028 [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:46.028 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:46.028 [431/710] Linking static target lib/librte_fib.a 00:02:46.028 [432/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:46.285 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.285 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.285 [435/710] Linking target lib/librte_pdcp.so.24.0 00:02:46.285 [436/710] Linking target lib/librte_fib.so.24.0 00:02:46.543 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:47.109 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:47.109 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:47.109 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:47.109 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:47.109 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:47.367 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:47.367 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:47.626 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:47.626 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:47.626 [447/710] Linking static target lib/librte_port.a 00:02:47.885 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:47.885 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:47.885 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:48.144 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:48.144 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.144 [453/710] Linking target lib/librte_port.so.24.0 00:02:48.144 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:48.144 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:48.403 [456/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:48.403 [457/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:48.403 [458/710] Linking static target lib/librte_pdump.a 00:02:48.403 [459/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.661 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.661 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:48.661 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:48.920 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:49.178 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:49.178 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:49.178 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:49.178 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:49.178 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:49.436 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:49.695 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:49.695 [471/710] Linking static target lib/librte_table.a 00:02:49.695 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:49.695 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:50.261 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:50.261 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.520 [476/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:50.520 [477/710] Linking target lib/librte_table.so.24.0 00:02:50.520 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:50.779 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:50.779 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:51.038 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:51.038 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:51.296 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:51.296 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:51.296 [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:51.296 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:51.863 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:51.863 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:51.863 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:51.863 [490/710] Linking static target lib/librte_graph.a 00:02:52.122 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:52.122 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:52.122 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:52.688 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.688 [495/710] Linking target lib/librte_graph.so.24.0 00:02:52.688 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:52.688 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:52.689 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:52.689 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:53.255 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:53.255 [501/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:53.255 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:53.255 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:53.255 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:53.513 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.513 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:53.771 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:53.771 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:54.029 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.029 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.029 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.291 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.291 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:54.291 [514/710] Linking static target lib/librte_node.a 00:02:54.291 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.550 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.550 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.550 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.550 [519/710] Linking target lib/librte_node.so.24.0 00:02:54.550 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.807 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.807 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.807 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.807 [524/710] Linking static target drivers/librte_bus_vdev.a 00:02:54.807 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.807 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.066 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:02:55.066 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.066 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.066 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.066 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:55.324 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:55.324 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:55.324 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:55.324 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:55.324 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.324 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.324 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.583 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:55.583 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.584 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.584 [542/710] Linking static target drivers/librte_mempool_ring.a 00:02:55.584 [543/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:55.584 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.584 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:55.842 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:56.099 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:56.358 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:56.617 [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:56.617 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:56.617 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:57.549 [552/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:57.549 [553/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:57.549 [554/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:57.549 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:57.549 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:57.549 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:58.112 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:58.112 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:58.369 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:58.369 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:58.627 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:58.885 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:59.143 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:59.143 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:59.143 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:59.707 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:59.707 [568/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:59.965 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:59.965 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:59.965 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:59.965 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:59.965 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:00.222 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:00.480 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:00.480 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:00.480 [577/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:00.738 [578/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:00.738 [579/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.738 [580/710] Linking static target lib/librte_vhost.a 00:03:00.738 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:00.738 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:01.304 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:01.304 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:01.304 [585/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.304 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:01.304 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.304 [588/710] Linking static target drivers/librte_net_i40e.a 00:03:01.304 [589/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:01.304 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:01.304 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:01.304 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:01.870 [593/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:01.870 [594/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.870 [595/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.870 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:02.128 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:02.128 [598/710] Linking target lib/librte_vhost.so.24.0 00:03:02.128 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:02.386 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:02.644 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:02.644 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:02.644 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:02.644 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:02.903 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:02.903 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:03.162 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:03.420 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:03.420 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:03.678 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:03.678 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:03.678 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:03.935 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:03.936 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:03.936 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:03.936 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:03.936 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:04.193 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:04.451 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:04.451 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:04.708 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:04.708 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:04.966 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:05.902 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:05.902 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:05.902 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:05.902 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:05.902 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:06.160 [629/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:06.160 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:06.160 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:06.418 [632/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.418 [633/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:06.418 [634/710] Linking static target lib/librte_pipeline.a 00:03:06.418 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:06.418 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:06.676 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:06.676 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:06.676 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 
00:03:06.935 [640/710] Linking target app/dpdk-graph 00:03:06.935 [641/710] Linking target app/dpdk-dumpcap 00:03:07.193 [642/710] Linking target app/dpdk-pdump 00:03:07.193 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:07.193 [644/710] Linking target app/dpdk-proc-info 00:03:07.193 [645/710] Linking target app/dpdk-test-acl 00:03:07.193 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:07.452 [647/710] Linking target app/dpdk-test-cmdline 00:03:07.452 [648/710] Linking target app/dpdk-test-compress-perf 00:03:07.452 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:07.452 [650/710] Linking target app/dpdk-test-crypto-perf 00:03:07.452 [651/710] Linking target app/dpdk-test-dma-perf 00:03:07.711 [652/710] Linking target app/dpdk-test-fib 00:03:07.711 [653/710] Linking target app/dpdk-test-gpudev 00:03:07.711 [654/710] Linking target app/dpdk-test-flow-perf 00:03:07.969 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:07.969 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:07.969 [657/710] Linking target app/dpdk-test-eventdev 00:03:08.228 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:08.228 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:08.487 [660/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:08.487 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:08.487 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:08.746 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:08.746 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:08.746 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:08.746 [666/710] Linking target app/dpdk-test-bbdev 00:03:09.005 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:09.005 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:09.264 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:09.264 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:09.264 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:09.264 [672/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.523 [673/710] Linking target lib/librte_pipeline.so.24.0 00:03:09.523 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:09.781 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:09.781 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:09.781 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:10.040 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:10.040 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:10.298 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:10.298 [681/710] Linking target app/dpdk-test-mldev 00:03:10.298 [682/710] Linking target app/dpdk-test-pipeline 00:03:10.557 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:11.125 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:11.125 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:11.125 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:11.125 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:11.125 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:11.384 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:11.384 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:11.642 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:11.642 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:11.642 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:12.208 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:12.467 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:12.467 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:12.726 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:12.726 [698/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:12.984 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:12.984 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:12.984 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:13.263 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:13.263 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:13.263 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:13.263 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:13.263 [706/710] Linking target app/dpdk-test-regex 00:03:13.539 [707/710] Linking target app/dpdk-test-sad 00:03:13.797 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:13.797 [709/710] Linking target app/dpdk-testpmd 00:03:14.361 [710/710] Linking target app/dpdk-test-security-perf 00:03:14.361 10:52:19 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:14.361 10:52:19 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:14.361 10:52:19 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:14.361 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:14.361 [0/1] Installing files. 
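For context, the trace above is the tail end of the usual out-of-tree meson/ninja flow for building and installing DPDK. A minimal sketch of that flow follows; the configure step and its exact flags are not part of this excerpt, so the meson invocation below is an assumption, with only the install prefix inferred from the /home/vagrant/spdk_repo/dpdk/build/share/dpdk destinations in the listing that comes next.

    # Assumed configure step (not shown in this excerpt); prefix inferred from the
    # install destinations below, all other options left at their meson defaults.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
        --prefix=/home/vagrant/spdk_repo/dpdk/build

    # Parallel compile; this produces the [N/710] progress entries logged above.
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10

    # Install step actually visible in the trace above; it copies libraries, PMDs and
    # the examples/ source tree under the prefix, producing the "Installing ..." entries.
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install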
00:03:14.929 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.929 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.930 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.932 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.932 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
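Illustrative aside, not part of the console output above: the entries at this point show the install step copying each DPDK library into /home/vagrant/spdk_repo/dpdk/build/lib in both static (.a) and versioned shared (.so.24.0) form, including the EAL installed a few entries earlier. A minimal sketch of a program built against those just-installed EAL headers and libraries might look like the following; the file name and every value in it are assumptions for illustration, and nothing in this job compiles such a file.

    /* minimal_eal.c - illustrative sketch only; not produced or compiled by this CI job */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        /* Initialize the Environment Abstraction Layer from the command line. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "rte_eal_init() failed\n");
            return 1;
        }

        printf("EAL initialized, %u lcore(s) available\n", rte_lcore_count());

        /* Release hugepages and other EAL resources before exiting. */
        rte_eal_cleanup();
        return 0;
    }

Assuming the pkg-config metadata for this install prefix is on PKG_CONFIG_PATH, a command along the lines of cc minimal_eal.c $(pkg-config --cflags --libs libdpdk) would be the expected way to link it; that command is an assumption for illustration, not something taken from this log.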
00:03:14.932 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.932 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
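Another illustrative aside, not part of the job output: librte_mempool and librte_mbuf, installed a few entries back, provide the buffer-allocation layer that most of the libraries listed here build on. A hedged fragment of how a packet-buffer pool is typically created and used is sketched below; it assumes rte_eal_init() has already succeeded, and the pool name and sizes are arbitrary example values rather than anything configured by this build.

    /* Sketch only: create a pool of packet mbufs and allocate/free one buffer. */
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    static void mbuf_pool_demo(void)
    {
        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "demo_pool",                 /* pool name (example value) */
            8191,                        /* number of mbufs in the pool */
            256,                         /* per-lcore cache size */
            0,                           /* private data size per mbuf */
            RTE_MBUF_DEFAULT_BUF_SIZE,   /* data room per mbuf */
            rte_socket_id());            /* allocate on the caller's NUMA socket */
        if (pool == NULL)
            return;

        struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
        if (m != NULL)
            rte_pktmbuf_free(m);         /* return the buffer to the pool */
    }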
00:03:14.933 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.933 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:15.192 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:15.192 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:15.192 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.192 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:15.192 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.192 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.193 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.453 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.454 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:15.455 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:15.455 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:15.455 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:15.455 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:15.455 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:15.455 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:15.455 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:15.455 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:15.455 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:15.455 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:15.455 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:15.455 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:15.455 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:15.455 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:15.455 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:15.455 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:15.455 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:15.455 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:15.455 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:15.455 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:15.455 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:15.455 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:15.455 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:15.455 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:15.455 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:15.455 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:15.455 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:15.455 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:15.455 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:15.455 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:15.455 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:15.455 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:15.455 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:15.455 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:15.455 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:15.455 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:15.455 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:15.455 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:15.455 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:15.455 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:15.455 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:15.455 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:15.455 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:15.455 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:15.455 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:15.455 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:15.455 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:15.455 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:15.455 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:15.455 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:15.455 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:15.455 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:15.455 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:15.455 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:15.455 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:15.455 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:15.455 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:15.455 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:15.456 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:15.456 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:15.456 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:15.456 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:15.456 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:15.456 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:15.456 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:15.456 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:15.456 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:15.456 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:15.456 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:15.456 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:15.456 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:15.456 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:15.456 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:15.456 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:15.456 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:15.456 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:15.456 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:15.456 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:15.456 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:15.456 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:15.456 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:15.456 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:15.456 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:15.456 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:15.456 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:15.456 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:15.456 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:15.456 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:15.456 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:15.456 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:15.456 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:15.456 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:15.456 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:15.456 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:15.456 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:15.456 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:15.456 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:15.456 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:15.456 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:15.456 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:15.456 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:15.456 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:15.456 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:15.456 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:15.456 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:15.456 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:15.456 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:15.456 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:15.456 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:15.456 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:15.456 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:15.456 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:15.456 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:15.456 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:15.456 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:15.456 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:15.456 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:15.456 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:15.456 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:15.456 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:15.456 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:15.456 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:15.456 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:15.456 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:15.456 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:15.456 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:15.456 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:15.456 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:15.456 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:15.456 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:15.456 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:15.456 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:15.456 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:15.456 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:15.456 10:52:20 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:15.456 ************************************ 00:03:15.456 END TEST build_native_dpdk 00:03:15.456 ************************************ 00:03:15.456 10:52:20 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:15.456 00:03:15.456 real 1m2.746s 00:03:15.456 user 7m41.697s 00:03:15.456 sys 1m5.331s 00:03:15.456 10:52:20 build_native_dpdk -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:15.456 10:52:20 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:15.456 10:52:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:15.456 10:52:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:15.456 10:52:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:15.456 10:52:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:15.456 10:52:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:15.456 10:52:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:15.456 10:52:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:15.456 10:52:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:15.715 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:15.715 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.715 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:15.715 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:15.973 Using 'verbs' RDMA provider 00:03:29.561 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:44.457 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:44.457 Creating mk/config.mk...done. 00:03:44.457 Creating mk/cc.flags.mk...done. 00:03:44.457 Type 'make' to build. 00:03:44.457 10:52:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:44.457 10:52:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:44.457 10:52:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:44.457 10:52:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.457 ************************************ 00:03:44.457 START TEST make 00:03:44.457 ************************************ 00:03:44.457 10:52:48 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:44.457 make[1]: Nothing to be done for 'all'. 
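Condensed, the sequence captured above is: the DPDK tree installed under dpdk/build is handed to SPDK's configure via --with-dpdk, and the repository is then built with a parallel make (the run_test/xtrace wrappers around it are test-harness plumbing). A minimal sketch of the same flow, reusing the paths and a subset of the flags shown in the log:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-shared \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-vfio-user --with-uring --with-ublk
    make -j10

configure resolves the DPDK libraries and includes from that build prefix (hence the "Using .../build/lib/pkgconfig for additional libs" line), and make then descends into the submodules built next.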
00:03:44.715 The Meson build system 00:03:44.715 Version: 1.5.0 00:03:44.715 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:44.715 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:44.715 Build type: native build 00:03:44.715 Project name: libvfio-user 00:03:44.715 Project version: 0.0.1 00:03:44.715 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:44.715 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:44.715 Host machine cpu family: x86_64 00:03:44.715 Host machine cpu: x86_64 00:03:44.715 Run-time dependency threads found: YES 00:03:44.715 Library dl found: YES 00:03:44.715 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:44.715 Run-time dependency json-c found: YES 0.17 00:03:44.715 Run-time dependency cmocka found: YES 1.1.7 00:03:44.715 Program pytest-3 found: NO 00:03:44.715 Program flake8 found: NO 00:03:44.715 Program misspell-fixer found: NO 00:03:44.715 Program restructuredtext-lint found: NO 00:03:44.715 Program valgrind found: YES (/usr/bin/valgrind) 00:03:44.715 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:44.715 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:44.715 Compiler for C supports arguments -Wwrite-strings: YES 00:03:44.715 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:44.715 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:44.715 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:44.715 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:44.715 Build targets in project: 8 00:03:44.715 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:44.715 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:44.715 00:03:44.715 libvfio-user 0.0.1 00:03:44.715 00:03:44.715 User defined options 00:03:44.715 buildtype : debug 00:03:44.715 default_library: shared 00:03:44.715 libdir : /usr/local/lib 00:03:44.715 00:03:44.715 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:45.284 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:45.284 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:45.284 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:45.284 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:45.284 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:45.284 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:45.284 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:45.284 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:45.543 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:45.543 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:45.543 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:45.543 [11/37] Compiling C object samples/null.p/null.c.o 00:03:45.543 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:45.543 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:45.543 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:45.543 [15/37] Compiling C object samples/client.p/client.c.o 00:03:45.543 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:45.543 [17/37] Compiling C object samples/server.p/server.c.o 00:03:45.543 [18/37] Linking target samples/client 00:03:45.543 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:45.543 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:45.543 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:45.802 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:45.802 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:45.802 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:45.802 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:45.802 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:45.802 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:45.802 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:45.802 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:45.802 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:45.802 [31/37] Linking target test/unit_tests 00:03:45.802 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:46.060 [33/37] Linking target samples/gpio-pci-idio-16 00:03:46.060 [34/37] Linking target samples/lspci 00:03:46.060 [35/37] Linking target samples/null 00:03:46.060 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:46.060 [37/37] Linking target samples/server 00:03:46.060 INFO: autodetecting backend as ninja 00:03:46.060 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:46.060 
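The libvfio-user submodule is configured with Meson and compiled with Ninja before being staged into SPDK's build tree (the DESTDIR install is the very next entry). A hedged sketch of that standard out-of-tree Meson flow, reusing the source and build directories reported above; the explicit meson setup invocation and -D options are assumptions mirroring the "User defined options" Meson printed, since the log only shows the resulting build:

    # configure the build directory with a debug build type and shared libraries
    meson setup -Dbuildtype=debug -Ddefault_library=shared \
        /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
        /home/vagrant/spdk_repo/spdk/libvfio-user
    # compile the 37 targets listed above
    ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
    # stage the install into SPDK's build tree instead of the real prefix
    DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
        meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug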
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:46.319 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:46.319 ninja: no work to do. 00:04:42.534 CC lib/ut_mock/mock.o 00:04:42.534 CC lib/ut/ut.o 00:04:42.534 CC lib/log/log.o 00:04:42.534 CC lib/log/log_flags.o 00:04:42.534 CC lib/log/log_deprecated.o 00:04:42.534 LIB libspdk_ut_mock.a 00:04:42.534 LIB libspdk_ut.a 00:04:42.534 LIB libspdk_log.a 00:04:42.534 SO libspdk_ut_mock.so.6.0 00:04:42.534 SO libspdk_ut.so.2.0 00:04:42.534 SO libspdk_log.so.7.1 00:04:42.534 SYMLINK libspdk_ut.so 00:04:42.534 SYMLINK libspdk_ut_mock.so 00:04:42.534 SYMLINK libspdk_log.so 00:04:42.534 CC lib/ioat/ioat.o 00:04:42.534 CC lib/util/base64.o 00:04:42.534 CC lib/util/cpuset.o 00:04:42.534 CC lib/util/bit_array.o 00:04:42.534 CC lib/util/crc32.o 00:04:42.534 CC lib/util/crc32c.o 00:04:42.534 CC lib/util/crc16.o 00:04:42.534 CXX lib/trace_parser/trace.o 00:04:42.534 CC lib/dma/dma.o 00:04:42.534 CC lib/vfio_user/host/vfio_user_pci.o 00:04:42.534 CC lib/vfio_user/host/vfio_user.o 00:04:42.534 CC lib/util/crc32_ieee.o 00:04:42.534 CC lib/util/crc64.o 00:04:42.534 CC lib/util/dif.o 00:04:42.534 LIB libspdk_dma.a 00:04:42.534 CC lib/util/fd.o 00:04:42.534 SO libspdk_dma.so.5.0 00:04:42.534 CC lib/util/fd_group.o 00:04:42.534 CC lib/util/file.o 00:04:42.534 SYMLINK libspdk_dma.so 00:04:42.534 CC lib/util/hexlify.o 00:04:42.534 CC lib/util/iov.o 00:04:42.534 CC lib/util/math.o 00:04:42.534 LIB libspdk_ioat.a 00:04:42.534 SO libspdk_ioat.so.7.0 00:04:42.534 CC lib/util/net.o 00:04:42.534 SYMLINK libspdk_ioat.so 00:04:42.534 CC lib/util/pipe.o 00:04:42.534 LIB libspdk_vfio_user.a 00:04:42.534 SO libspdk_vfio_user.so.5.0 00:04:42.534 CC lib/util/strerror_tls.o 00:04:42.534 CC lib/util/string.o 00:04:42.534 SYMLINK libspdk_vfio_user.so 00:04:42.534 CC lib/util/uuid.o 00:04:42.534 CC lib/util/xor.o 00:04:42.534 CC lib/util/zipf.o 00:04:42.534 CC lib/util/md5.o 00:04:42.534 LIB libspdk_util.a 00:04:42.534 SO libspdk_util.so.10.0 00:04:42.534 SYMLINK libspdk_util.so 00:04:42.534 LIB libspdk_trace_parser.a 00:04:42.534 SO libspdk_trace_parser.so.6.0 00:04:42.534 SYMLINK libspdk_trace_parser.so 00:04:42.534 CC lib/vmd/vmd.o 00:04:42.534 CC lib/vmd/led.o 00:04:42.534 CC lib/rdma_provider/common.o 00:04:42.534 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:42.534 CC lib/rdma_utils/rdma_utils.o 00:04:42.534 CC lib/env_dpdk/env.o 00:04:42.534 CC lib/idxd/idxd.o 00:04:42.534 CC lib/env_dpdk/memory.o 00:04:42.534 CC lib/conf/conf.o 00:04:42.534 CC lib/json/json_parse.o 00:04:42.534 CC lib/json/json_util.o 00:04:42.534 CC lib/json/json_write.o 00:04:42.534 LIB libspdk_rdma_provider.a 00:04:42.534 LIB libspdk_conf.a 00:04:42.534 SO libspdk_rdma_provider.so.6.0 00:04:42.534 SO libspdk_conf.so.6.0 00:04:42.534 CC lib/idxd/idxd_user.o 00:04:42.534 LIB libspdk_rdma_utils.a 00:04:42.534 SO libspdk_rdma_utils.so.1.0 00:04:42.534 SYMLINK libspdk_conf.so 00:04:42.534 SYMLINK libspdk_rdma_provider.so 00:04:42.534 CC lib/env_dpdk/pci.o 00:04:42.534 CC lib/env_dpdk/init.o 00:04:42.534 CC lib/idxd/idxd_kernel.o 00:04:42.534 SYMLINK libspdk_rdma_utils.so 00:04:42.534 CC lib/env_dpdk/threads.o 00:04:42.534 LIB libspdk_json.a 00:04:42.534 CC lib/env_dpdk/pci_ioat.o 00:04:42.534 SO libspdk_json.so.6.0 00:04:42.534 CC lib/env_dpdk/pci_virtio.o 00:04:42.534 LIB libspdk_idxd.a 00:04:42.534 CC lib/env_dpdk/pci_vmd.o 00:04:42.534 SYMLINK libspdk_json.so 
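In the SPDK make output that starts here, CC lines compile objects, LIB lines archive the static .a libraries, SO lines appear to produce the versioned shared objects (for example libspdk_log.so.7.1), and SYMLINK lines create the unversioned development links next to them. The effect of an SO plus SYMLINK pair is the usual soname link chain; a small illustrative toy (demo.c and the library name are hypothetical, not part of this build):

    # build a versioned shared object whose soname is libdemo.so.1
    cc -shared -fPIC -Wl,-soname,libdemo.so.1 -o libdemo.so.1.0 demo.c
    ln -sf libdemo.so.1.0 libdemo.so.1   # loader link, matches the soname
    ln -sf libdemo.so.1 libdemo.so       # development link used at link time

The DPDK install earlier in this log follows the same convention, with librte_*.so -> librte_*.so.24 -> librte_*.so.24.0.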
00:04:42.534 SO libspdk_idxd.so.12.1 00:04:42.534 CC lib/env_dpdk/pci_idxd.o 00:04:42.534 SYMLINK libspdk_idxd.so 00:04:42.534 CC lib/env_dpdk/pci_event.o 00:04:42.534 LIB libspdk_vmd.a 00:04:42.534 CC lib/env_dpdk/sigbus_handler.o 00:04:42.534 CC lib/env_dpdk/pci_dpdk.o 00:04:42.534 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:42.534 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:42.534 SO libspdk_vmd.so.6.0 00:04:42.534 CC lib/jsonrpc/jsonrpc_server.o 00:04:42.534 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:42.534 SYMLINK libspdk_vmd.so 00:04:42.534 CC lib/jsonrpc/jsonrpc_client.o 00:04:42.534 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:42.534 LIB libspdk_jsonrpc.a 00:04:42.534 SO libspdk_jsonrpc.so.6.0 00:04:42.534 SYMLINK libspdk_jsonrpc.so 00:04:42.534 LIB libspdk_env_dpdk.a 00:04:42.534 CC lib/rpc/rpc.o 00:04:42.534 SO libspdk_env_dpdk.so.15.1 00:04:42.534 SYMLINK libspdk_env_dpdk.so 00:04:42.534 LIB libspdk_rpc.a 00:04:42.534 SO libspdk_rpc.so.6.0 00:04:42.534 SYMLINK libspdk_rpc.so 00:04:42.534 CC lib/trace/trace.o 00:04:42.534 CC lib/trace/trace_flags.o 00:04:42.534 CC lib/trace/trace_rpc.o 00:04:42.534 CC lib/notify/notify_rpc.o 00:04:42.534 CC lib/notify/notify.o 00:04:42.534 CC lib/keyring/keyring.o 00:04:42.534 CC lib/keyring/keyring_rpc.o 00:04:42.534 LIB libspdk_notify.a 00:04:42.534 SO libspdk_notify.so.6.0 00:04:42.534 LIB libspdk_keyring.a 00:04:42.534 SO libspdk_keyring.so.2.0 00:04:42.534 SYMLINK libspdk_notify.so 00:04:42.534 LIB libspdk_trace.a 00:04:42.534 SO libspdk_trace.so.11.0 00:04:42.534 SYMLINK libspdk_keyring.so 00:04:42.534 SYMLINK libspdk_trace.so 00:04:42.534 CC lib/thread/iobuf.o 00:04:42.534 CC lib/thread/thread.o 00:04:42.534 CC lib/sock/sock.o 00:04:42.534 CC lib/sock/sock_rpc.o 00:04:42.534 LIB libspdk_sock.a 00:04:42.534 SO libspdk_sock.so.10.0 00:04:42.534 SYMLINK libspdk_sock.so 00:04:42.534 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.534 CC lib/nvme/nvme_ctrlr.o 00:04:42.534 CC lib/nvme/nvme_fabric.o 00:04:42.534 CC lib/nvme/nvme_ns_cmd.o 00:04:42.534 CC lib/nvme/nvme_ns.o 00:04:42.534 CC lib/nvme/nvme_pcie_common.o 00:04:42.534 CC lib/nvme/nvme_qpair.o 00:04:42.534 CC lib/nvme/nvme_pcie.o 00:04:42.534 CC lib/nvme/nvme.o 00:04:42.534 LIB libspdk_thread.a 00:04:42.534 SO libspdk_thread.so.11.0 00:04:42.534 CC lib/nvme/nvme_quirks.o 00:04:42.534 CC lib/nvme/nvme_transport.o 00:04:42.534 SYMLINK libspdk_thread.so 00:04:42.534 CC lib/nvme/nvme_discovery.o 00:04:42.534 CC lib/accel/accel.o 00:04:42.534 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.534 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.534 CC lib/blob/blobstore.o 00:04:42.534 CC lib/blob/request.o 00:04:42.534 CC lib/nvme/nvme_tcp.o 00:04:42.534 CC lib/blob/zeroes.o 00:04:42.534 CC lib/blob/blob_bs_dev.o 00:04:42.534 CC lib/nvme/nvme_opal.o 00:04:42.534 CC lib/nvme/nvme_io_msg.o 00:04:42.534 CC lib/nvme/nvme_poll_group.o 00:04:42.534 CC lib/nvme/nvme_zns.o 00:04:42.534 CC lib/nvme/nvme_stubs.o 00:04:42.534 CC lib/init/json_config.o 00:04:42.534 CC lib/init/subsystem.o 00:04:42.792 CC lib/accel/accel_rpc.o 00:04:42.792 CC lib/nvme/nvme_auth.o 00:04:42.792 CC lib/init/subsystem_rpc.o 00:04:42.792 CC lib/init/rpc.o 00:04:42.792 CC lib/nvme/nvme_cuse.o 00:04:42.792 CC lib/accel/accel_sw.o 00:04:42.792 CC lib/nvme/nvme_vfio_user.o 00:04:43.050 CC lib/nvme/nvme_rdma.o 00:04:43.051 LIB libspdk_init.a 00:04:43.051 CC lib/virtio/virtio.o 00:04:43.051 SO libspdk_init.so.6.0 00:04:43.309 CC lib/vfu_tgt/tgt_endpoint.o 00:04:43.309 SYMLINK libspdk_init.so 00:04:43.309 CC lib/vfu_tgt/tgt_rpc.o 00:04:43.309 LIB libspdk_accel.a 
00:04:43.309 SO libspdk_accel.so.16.0 00:04:43.309 CC lib/virtio/virtio_vhost_user.o 00:04:43.309 SYMLINK libspdk_accel.so 00:04:43.309 CC lib/virtio/virtio_vfio_user.o 00:04:43.567 LIB libspdk_vfu_tgt.a 00:04:43.567 CC lib/fsdev/fsdev.o 00:04:43.567 CC lib/bdev/bdev.o 00:04:43.567 SO libspdk_vfu_tgt.so.3.0 00:04:43.567 CC lib/bdev/bdev_rpc.o 00:04:43.567 SYMLINK libspdk_vfu_tgt.so 00:04:43.567 CC lib/fsdev/fsdev_io.o 00:04:43.825 CC lib/virtio/virtio_pci.o 00:04:43.825 CC lib/fsdev/fsdev_rpc.o 00:04:43.825 CC lib/event/app.o 00:04:43.825 CC lib/event/reactor.o 00:04:43.825 CC lib/bdev/bdev_zone.o 00:04:43.825 CC lib/bdev/part.o 00:04:44.083 LIB libspdk_virtio.a 00:04:44.083 CC lib/event/log_rpc.o 00:04:44.083 SO libspdk_virtio.so.7.0 00:04:44.083 CC lib/event/app_rpc.o 00:04:44.083 SYMLINK libspdk_virtio.so 00:04:44.083 CC lib/bdev/scsi_nvme.o 00:04:44.083 LIB libspdk_fsdev.a 00:04:44.348 CC lib/event/scheduler_static.o 00:04:44.348 SO libspdk_fsdev.so.2.0 00:04:44.348 SYMLINK libspdk_fsdev.so 00:04:44.348 LIB libspdk_nvme.a 00:04:44.348 LIB libspdk_event.a 00:04:44.607 SO libspdk_event.so.14.0 00:04:44.607 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.607 SYMLINK libspdk_event.so 00:04:44.607 SO libspdk_nvme.so.14.1 00:04:44.865 SYMLINK libspdk_nvme.so 00:04:44.865 LIB libspdk_blob.a 00:04:44.865 SO libspdk_blob.so.11.0 00:04:45.122 SYMLINK libspdk_blob.so 00:04:45.122 CC lib/blobfs/blobfs.o 00:04:45.122 CC lib/blobfs/tree.o 00:04:45.122 CC lib/lvol/lvol.o 00:04:45.122 LIB libspdk_fuse_dispatcher.a 00:04:45.380 SO libspdk_fuse_dispatcher.so.1.0 00:04:45.380 SYMLINK libspdk_fuse_dispatcher.so 00:04:46.317 LIB libspdk_blobfs.a 00:04:46.317 SO libspdk_blobfs.so.10.0 00:04:46.317 LIB libspdk_lvol.a 00:04:46.317 SYMLINK libspdk_blobfs.so 00:04:46.317 SO libspdk_lvol.so.10.0 00:04:46.317 LIB libspdk_bdev.a 00:04:46.317 SYMLINK libspdk_lvol.so 00:04:46.317 SO libspdk_bdev.so.17.0 00:04:46.575 SYMLINK libspdk_bdev.so 00:04:46.575 CC lib/nbd/nbd.o 00:04:46.575 CC lib/nbd/nbd_rpc.o 00:04:46.575 CC lib/nvmf/ctrlr.o 00:04:46.575 CC lib/nvmf/ctrlr_discovery.o 00:04:46.575 CC lib/nvmf/subsystem.o 00:04:46.575 CC lib/nvmf/ctrlr_bdev.o 00:04:46.575 CC lib/nvmf/nvmf.o 00:04:46.575 CC lib/ftl/ftl_core.o 00:04:46.575 CC lib/scsi/dev.o 00:04:46.575 CC lib/ublk/ublk.o 00:04:46.833 CC lib/scsi/lun.o 00:04:47.092 CC lib/ftl/ftl_init.o 00:04:47.092 CC lib/scsi/port.o 00:04:47.092 LIB libspdk_nbd.a 00:04:47.092 SO libspdk_nbd.so.7.0 00:04:47.092 CC lib/scsi/scsi.o 00:04:47.092 SYMLINK libspdk_nbd.so 00:04:47.092 CC lib/scsi/scsi_bdev.o 00:04:47.351 CC lib/ftl/ftl_layout.o 00:04:47.351 CC lib/scsi/scsi_pr.o 00:04:47.351 CC lib/nvmf/nvmf_rpc.o 00:04:47.351 CC lib/nvmf/transport.o 00:04:47.351 CC lib/ublk/ublk_rpc.o 00:04:47.351 CC lib/nvmf/tcp.o 00:04:47.610 LIB libspdk_ublk.a 00:04:47.610 CC lib/ftl/ftl_debug.o 00:04:47.610 CC lib/scsi/scsi_rpc.o 00:04:47.610 CC lib/scsi/task.o 00:04:47.610 SO libspdk_ublk.so.3.0 00:04:47.610 SYMLINK libspdk_ublk.so 00:04:47.610 CC lib/nvmf/stubs.o 00:04:47.610 CC lib/ftl/ftl_io.o 00:04:47.610 CC lib/ftl/ftl_sb.o 00:04:47.868 LIB libspdk_scsi.a 00:04:47.868 CC lib/ftl/ftl_l2p.o 00:04:47.868 SO libspdk_scsi.so.9.0 00:04:47.868 CC lib/ftl/ftl_l2p_flat.o 00:04:47.868 CC lib/nvmf/mdns_server.o 00:04:47.868 CC lib/nvmf/vfio_user.o 00:04:47.868 SYMLINK libspdk_scsi.so 00:04:47.868 CC lib/nvmf/rdma.o 00:04:48.126 CC lib/nvmf/auth.o 00:04:48.126 CC lib/ftl/ftl_nv_cache.o 00:04:48.126 CC lib/ftl/ftl_band.o 00:04:48.126 CC lib/ftl/ftl_band_ops.o 00:04:48.386 CC lib/iscsi/conn.o 
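With the core libraries linked, one way to confirm that they were built against the DPDK installed earlier in this log rather than a system copy is to inspect their dynamic sections. A hedged sketch using standard binutils/glibc tools; the build/lib output directory and the choice of libspdk_env_dpdk.so are assumptions based on SPDK's usual layout, not read from this log:

    # list the DT_NEEDED and RPATH/RUNPATH entries of the env_dpdk shim library
    readelf -d /home/vagrant/spdk_repo/spdk/build/lib/libspdk_env_dpdk.so | grep -E 'NEEDED|R(UN)?PATH'
    # resolve those dependencies against the custom DPDK location from the configure step
    LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib \
        ldd /home/vagrant/spdk_repo/spdk/build/lib/libspdk_env_dpdk.so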
00:04:48.386 CC lib/iscsi/init_grp.o 00:04:48.386 CC lib/ftl/ftl_writer.o 00:04:48.651 CC lib/ftl/ftl_rq.o 00:04:48.651 CC lib/vhost/vhost.o 00:04:48.651 CC lib/vhost/vhost_rpc.o 00:04:48.651 CC lib/vhost/vhost_scsi.o 00:04:48.909 CC lib/vhost/vhost_blk.o 00:04:48.909 CC lib/vhost/rte_vhost_user.o 00:04:48.909 CC lib/iscsi/iscsi.o 00:04:49.167 CC lib/ftl/ftl_reloc.o 00:04:49.167 CC lib/iscsi/param.o 00:04:49.425 CC lib/iscsi/portal_grp.o 00:04:49.425 CC lib/ftl/ftl_l2p_cache.o 00:04:49.425 CC lib/ftl/ftl_p2l.o 00:04:49.425 CC lib/iscsi/tgt_node.o 00:04:49.684 CC lib/ftl/ftl_p2l_log.o 00:04:49.684 CC lib/iscsi/iscsi_subsystem.o 00:04:49.684 CC lib/iscsi/iscsi_rpc.o 00:04:49.684 CC lib/iscsi/task.o 00:04:49.942 CC lib/ftl/mngt/ftl_mngt.o 00:04:49.942 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:49.942 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:49.942 LIB libspdk_vhost.a 00:04:49.942 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:49.942 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:49.942 SO libspdk_vhost.so.8.0 00:04:50.201 LIB libspdk_nvmf.a 00:04:50.201 SYMLINK libspdk_vhost.so 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:50.201 SO libspdk_nvmf.so.20.0 00:04:50.201 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:50.460 CC lib/ftl/utils/ftl_conf.o 00:04:50.460 CC lib/ftl/utils/ftl_md.o 00:04:50.460 LIB libspdk_iscsi.a 00:04:50.460 CC lib/ftl/utils/ftl_mempool.o 00:04:50.460 CC lib/ftl/utils/ftl_bitmap.o 00:04:50.460 CC lib/ftl/utils/ftl_property.o 00:04:50.460 SYMLINK libspdk_nvmf.so 00:04:50.460 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:50.460 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:50.460 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:50.460 SO libspdk_iscsi.so.8.0 00:04:50.718 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:50.718 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:50.718 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:50.718 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:50.718 SYMLINK libspdk_iscsi.so 00:04:50.718 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:50.718 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:50.718 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:50.718 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:50.718 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:50.718 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:50.977 CC lib/ftl/base/ftl_base_dev.o 00:04:50.977 CC lib/ftl/base/ftl_base_bdev.o 00:04:50.977 CC lib/ftl/ftl_trace.o 00:04:51.235 LIB libspdk_ftl.a 00:04:51.494 SO libspdk_ftl.so.9.0 00:04:51.753 SYMLINK libspdk_ftl.so 00:04:52.011 CC module/env_dpdk/env_dpdk_rpc.o 00:04:52.011 CC module/vfu_device/vfu_virtio.o 00:04:52.269 CC module/sock/uring/uring.o 00:04:52.269 CC module/fsdev/aio/fsdev_aio.o 00:04:52.269 CC module/keyring/file/keyring.o 00:04:52.269 CC module/sock/posix/posix.o 00:04:52.270 CC module/accel/error/accel_error.o 00:04:52.270 CC module/blob/bdev/blob_bdev.o 00:04:52.270 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:52.270 CC module/accel/ioat/accel_ioat.o 00:04:52.270 LIB libspdk_env_dpdk_rpc.a 00:04:52.270 SO libspdk_env_dpdk_rpc.so.6.0 00:04:52.270 SYMLINK libspdk_env_dpdk_rpc.so 00:04:52.270 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:52.270 CC module/keyring/file/keyring_rpc.o 00:04:52.270 CC module/accel/error/accel_error_rpc.o 00:04:52.528 LIB libspdk_scheduler_dynamic.a 00:04:52.528 CC module/accel/ioat/accel_ioat_rpc.o 
00:04:52.528 SO libspdk_scheduler_dynamic.so.4.0 00:04:52.528 LIB libspdk_keyring_file.a 00:04:52.528 CC module/fsdev/aio/linux_aio_mgr.o 00:04:52.528 LIB libspdk_blob_bdev.a 00:04:52.528 SO libspdk_keyring_file.so.2.0 00:04:52.528 SYMLINK libspdk_scheduler_dynamic.so 00:04:52.528 SO libspdk_blob_bdev.so.11.0 00:04:52.528 LIB libspdk_accel_ioat.a 00:04:52.528 SYMLINK libspdk_keyring_file.so 00:04:52.528 SYMLINK libspdk_blob_bdev.so 00:04:52.528 CC module/vfu_device/vfu_virtio_blk.o 00:04:52.528 SO libspdk_accel_ioat.so.6.0 00:04:52.528 LIB libspdk_accel_error.a 00:04:52.786 SO libspdk_accel_error.so.2.0 00:04:52.786 CC module/vfu_device/vfu_virtio_scsi.o 00:04:52.786 SYMLINK libspdk_accel_error.so 00:04:52.786 SYMLINK libspdk_accel_ioat.so 00:04:52.786 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:52.786 CC module/keyring/linux/keyring.o 00:04:52.786 LIB libspdk_fsdev_aio.a 00:04:52.787 SO libspdk_fsdev_aio.so.1.0 00:04:52.787 LIB libspdk_sock_uring.a 00:04:53.045 CC module/accel/dsa/accel_dsa.o 00:04:53.045 CC module/accel/iaa/accel_iaa.o 00:04:53.045 SO libspdk_sock_uring.so.5.0 00:04:53.045 LIB libspdk_sock_posix.a 00:04:53.045 CC module/accel/dsa/accel_dsa_rpc.o 00:04:53.045 CC module/scheduler/gscheduler/gscheduler.o 00:04:53.045 LIB libspdk_scheduler_dpdk_governor.a 00:04:53.045 SO libspdk_sock_posix.so.6.0 00:04:53.045 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:53.045 SYMLINK libspdk_sock_uring.so 00:04:53.045 CC module/keyring/linux/keyring_rpc.o 00:04:53.045 SYMLINK libspdk_fsdev_aio.so 00:04:53.045 CC module/accel/iaa/accel_iaa_rpc.o 00:04:53.045 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:53.045 CC module/vfu_device/vfu_virtio_rpc.o 00:04:53.045 SYMLINK libspdk_sock_posix.so 00:04:53.045 CC module/vfu_device/vfu_virtio_fs.o 00:04:53.045 LIB libspdk_scheduler_gscheduler.a 00:04:53.045 LIB libspdk_keyring_linux.a 00:04:53.045 SO libspdk_scheduler_gscheduler.so.4.0 00:04:53.304 SO libspdk_keyring_linux.so.1.0 00:04:53.304 LIB libspdk_accel_iaa.a 00:04:53.304 SYMLINK libspdk_scheduler_gscheduler.so 00:04:53.304 SO libspdk_accel_iaa.so.3.0 00:04:53.304 SYMLINK libspdk_keyring_linux.so 00:04:53.304 SYMLINK libspdk_accel_iaa.so 00:04:53.304 CC module/bdev/error/vbdev_error.o 00:04:53.304 CC module/bdev/delay/vbdev_delay.o 00:04:53.304 CC module/blobfs/bdev/blobfs_bdev.o 00:04:53.304 LIB libspdk_accel_dsa.a 00:04:53.304 CC module/bdev/gpt/gpt.o 00:04:53.304 LIB libspdk_vfu_device.a 00:04:53.304 SO libspdk_accel_dsa.so.5.0 00:04:53.562 CC module/bdev/lvol/vbdev_lvol.o 00:04:53.562 SO libspdk_vfu_device.so.3.0 00:04:53.562 CC module/bdev/malloc/bdev_malloc.o 00:04:53.562 CC module/bdev/null/bdev_null.o 00:04:53.562 SYMLINK libspdk_accel_dsa.so 00:04:53.562 CC module/bdev/null/bdev_null_rpc.o 00:04:53.562 CC module/bdev/nvme/bdev_nvme.o 00:04:53.562 SYMLINK libspdk_vfu_device.so 00:04:53.562 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:53.562 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:53.562 CC module/bdev/gpt/vbdev_gpt.o 00:04:53.562 CC module/bdev/error/vbdev_error_rpc.o 00:04:53.821 LIB libspdk_blobfs_bdev.a 00:04:53.821 LIB libspdk_bdev_delay.a 00:04:53.821 SO libspdk_blobfs_bdev.so.6.0 00:04:53.821 LIB libspdk_bdev_null.a 00:04:53.821 SO libspdk_bdev_delay.so.6.0 00:04:53.821 LIB libspdk_bdev_error.a 00:04:53.821 SO libspdk_bdev_null.so.6.0 00:04:53.821 SO libspdk_bdev_error.so.6.0 00:04:53.821 SYMLINK libspdk_blobfs_bdev.so 00:04:53.821 CC module/bdev/passthru/vbdev_passthru.o 00:04:53.821 SYMLINK libspdk_bdev_delay.so 00:04:53.821 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:04:53.821 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:53.821 SYMLINK libspdk_bdev_null.so 00:04:53.821 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:53.821 CC module/bdev/nvme/nvme_rpc.o 00:04:53.821 LIB libspdk_bdev_gpt.a 00:04:53.821 SYMLINK libspdk_bdev_error.so 00:04:53.821 CC module/bdev/raid/bdev_raid.o 00:04:54.079 SO libspdk_bdev_gpt.so.6.0 00:04:54.079 CC module/bdev/nvme/bdev_mdns_client.o 00:04:54.079 SYMLINK libspdk_bdev_gpt.so 00:04:54.079 CC module/bdev/nvme/vbdev_opal.o 00:04:54.079 LIB libspdk_bdev_malloc.a 00:04:54.079 CC module/bdev/split/vbdev_split.o 00:04:54.079 SO libspdk_bdev_malloc.so.6.0 00:04:54.340 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:54.340 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:54.340 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:54.340 SYMLINK libspdk_bdev_malloc.so 00:04:54.340 LIB libspdk_bdev_lvol.a 00:04:54.340 CC module/bdev/split/vbdev_split_rpc.o 00:04:54.340 SO libspdk_bdev_lvol.so.6.0 00:04:54.340 CC module/bdev/raid/bdev_raid_rpc.o 00:04:54.340 LIB libspdk_bdev_passthru.a 00:04:54.340 SYMLINK libspdk_bdev_lvol.so 00:04:54.597 SO libspdk_bdev_passthru.so.6.0 00:04:54.597 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:54.597 CC module/bdev/raid/bdev_raid_sb.o 00:04:54.597 SYMLINK libspdk_bdev_passthru.so 00:04:54.597 LIB libspdk_bdev_split.a 00:04:54.597 SO libspdk_bdev_split.so.6.0 00:04:54.597 CC module/bdev/uring/bdev_uring.o 00:04:54.597 CC module/bdev/aio/bdev_aio.o 00:04:54.597 SYMLINK libspdk_bdev_split.so 00:04:54.597 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:54.597 CC module/bdev/aio/bdev_aio_rpc.o 00:04:54.597 CC module/bdev/ftl/bdev_ftl.o 00:04:54.855 CC module/bdev/iscsi/bdev_iscsi.o 00:04:54.855 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:54.855 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:54.855 LIB libspdk_bdev_zone_block.a 00:04:54.855 SO libspdk_bdev_zone_block.so.6.0 00:04:55.114 CC module/bdev/raid/raid0.o 00:04:55.114 CC module/bdev/uring/bdev_uring_rpc.o 00:04:55.114 LIB libspdk_bdev_aio.a 00:04:55.114 SYMLINK libspdk_bdev_zone_block.so 00:04:55.114 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:55.114 CC module/bdev/raid/raid1.o 00:04:55.114 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:55.114 SO libspdk_bdev_aio.so.6.0 00:04:55.114 CC module/bdev/raid/concat.o 00:04:55.114 LIB libspdk_bdev_iscsi.a 00:04:55.114 SYMLINK libspdk_bdev_aio.so 00:04:55.114 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:55.114 LIB libspdk_bdev_ftl.a 00:04:55.114 SO libspdk_bdev_iscsi.so.6.0 00:04:55.114 SO libspdk_bdev_ftl.so.6.0 00:04:55.114 LIB libspdk_bdev_uring.a 00:04:55.373 SYMLINK libspdk_bdev_iscsi.so 00:04:55.373 SYMLINK libspdk_bdev_ftl.so 00:04:55.373 SO libspdk_bdev_uring.so.6.0 00:04:55.373 SYMLINK libspdk_bdev_uring.so 00:04:55.373 LIB libspdk_bdev_raid.a 00:04:55.373 SO libspdk_bdev_raid.so.6.0 00:04:55.631 SYMLINK libspdk_bdev_raid.so 00:04:55.631 LIB libspdk_bdev_virtio.a 00:04:55.631 SO libspdk_bdev_virtio.so.6.0 00:04:55.631 SYMLINK libspdk_bdev_virtio.so 00:04:56.196 LIB libspdk_bdev_nvme.a 00:04:56.455 SO libspdk_bdev_nvme.so.7.1 00:04:56.455 SYMLINK libspdk_bdev_nvme.so 00:04:57.022 CC module/event/subsystems/vmd/vmd.o 00:04:57.022 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:57.022 CC module/event/subsystems/fsdev/fsdev.o 00:04:57.022 CC module/event/subsystems/iobuf/iobuf.o 00:04:57.022 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:57.022 CC module/event/subsystems/scheduler/scheduler.o 00:04:57.022 CC module/event/subsystems/sock/sock.o 00:04:57.022 CC 
module/event/subsystems/keyring/keyring.o 00:04:57.022 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:57.022 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:57.022 LIB libspdk_event_vmd.a 00:04:57.022 LIB libspdk_event_scheduler.a 00:04:57.022 LIB libspdk_event_keyring.a 00:04:57.022 LIB libspdk_event_fsdev.a 00:04:57.022 LIB libspdk_event_sock.a 00:04:57.022 LIB libspdk_event_iobuf.a 00:04:57.022 LIB libspdk_event_vhost_blk.a 00:04:57.280 SO libspdk_event_fsdev.so.1.0 00:04:57.280 SO libspdk_event_scheduler.so.4.0 00:04:57.280 SO libspdk_event_keyring.so.1.0 00:04:57.280 SO libspdk_event_vmd.so.6.0 00:04:57.280 SO libspdk_event_sock.so.5.0 00:04:57.280 LIB libspdk_event_vfu_tgt.a 00:04:57.280 SO libspdk_event_vhost_blk.so.3.0 00:04:57.280 SO libspdk_event_iobuf.so.3.0 00:04:57.280 SO libspdk_event_vfu_tgt.so.3.0 00:04:57.280 SYMLINK libspdk_event_scheduler.so 00:04:57.280 SYMLINK libspdk_event_vmd.so 00:04:57.280 SYMLINK libspdk_event_keyring.so 00:04:57.280 SYMLINK libspdk_event_fsdev.so 00:04:57.280 SYMLINK libspdk_event_sock.so 00:04:57.280 SYMLINK libspdk_event_vfu_tgt.so 00:04:57.280 SYMLINK libspdk_event_iobuf.so 00:04:57.280 SYMLINK libspdk_event_vhost_blk.so 00:04:57.539 CC module/event/subsystems/accel/accel.o 00:04:57.797 LIB libspdk_event_accel.a 00:04:57.797 SO libspdk_event_accel.so.6.0 00:04:57.797 SYMLINK libspdk_event_accel.so 00:04:58.056 CC module/event/subsystems/bdev/bdev.o 00:04:58.314 LIB libspdk_event_bdev.a 00:04:58.314 SO libspdk_event_bdev.so.6.0 00:04:58.314 SYMLINK libspdk_event_bdev.so 00:04:58.572 CC module/event/subsystems/ublk/ublk.o 00:04:58.572 CC module/event/subsystems/scsi/scsi.o 00:04:58.572 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:58.572 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:58.572 CC module/event/subsystems/nbd/nbd.o 00:04:58.830 LIB libspdk_event_scsi.a 00:04:58.830 LIB libspdk_event_nbd.a 00:04:58.830 LIB libspdk_event_ublk.a 00:04:58.830 SO libspdk_event_scsi.so.6.0 00:04:58.830 SO libspdk_event_nbd.so.6.0 00:04:58.830 SO libspdk_event_ublk.so.3.0 00:04:58.830 SYMLINK libspdk_event_scsi.so 00:04:58.830 LIB libspdk_event_nvmf.a 00:04:58.830 SYMLINK libspdk_event_nbd.so 00:04:58.830 SYMLINK libspdk_event_ublk.so 00:04:58.830 SO libspdk_event_nvmf.so.6.0 00:04:59.088 SYMLINK libspdk_event_nvmf.so 00:04:59.088 CC module/event/subsystems/iscsi/iscsi.o 00:04:59.088 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:59.347 LIB libspdk_event_vhost_scsi.a 00:04:59.347 LIB libspdk_event_iscsi.a 00:04:59.347 SO libspdk_event_vhost_scsi.so.3.0 00:04:59.347 SO libspdk_event_iscsi.so.6.0 00:04:59.347 SYMLINK libspdk_event_iscsi.so 00:04:59.347 SYMLINK libspdk_event_vhost_scsi.so 00:04:59.605 SO libspdk.so.6.0 00:04:59.606 SYMLINK libspdk.so 00:04:59.864 CC app/trace_record/trace_record.o 00:04:59.864 CXX app/trace/trace.o 00:04:59.864 TEST_HEADER include/spdk/accel.h 00:04:59.864 TEST_HEADER include/spdk/accel_module.h 00:04:59.864 TEST_HEADER include/spdk/assert.h 00:04:59.864 TEST_HEADER include/spdk/barrier.h 00:04:59.864 TEST_HEADER include/spdk/base64.h 00:04:59.864 TEST_HEADER include/spdk/bdev.h 00:04:59.864 TEST_HEADER include/spdk/bdev_module.h 00:04:59.864 TEST_HEADER include/spdk/bdev_zone.h 00:04:59.864 TEST_HEADER include/spdk/bit_array.h 00:04:59.864 TEST_HEADER include/spdk/bit_pool.h 00:04:59.864 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:59.864 TEST_HEADER include/spdk/blob_bdev.h 00:04:59.864 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:59.864 TEST_HEADER include/spdk/blobfs.h 00:04:59.864 TEST_HEADER 
include/spdk/blob.h 00:04:59.864 TEST_HEADER include/spdk/conf.h 00:04:59.864 TEST_HEADER include/spdk/config.h 00:04:59.864 TEST_HEADER include/spdk/cpuset.h 00:04:59.864 TEST_HEADER include/spdk/crc16.h 00:04:59.864 TEST_HEADER include/spdk/crc32.h 00:04:59.864 TEST_HEADER include/spdk/crc64.h 00:04:59.864 TEST_HEADER include/spdk/dif.h 00:04:59.864 CC app/nvmf_tgt/nvmf_main.o 00:04:59.864 TEST_HEADER include/spdk/dma.h 00:04:59.864 TEST_HEADER include/spdk/endian.h 00:04:59.864 TEST_HEADER include/spdk/env_dpdk.h 00:04:59.864 TEST_HEADER include/spdk/env.h 00:04:59.864 TEST_HEADER include/spdk/event.h 00:04:59.864 TEST_HEADER include/spdk/fd_group.h 00:04:59.864 TEST_HEADER include/spdk/fd.h 00:04:59.864 TEST_HEADER include/spdk/file.h 00:04:59.864 TEST_HEADER include/spdk/fsdev.h 00:04:59.864 CC examples/util/zipf/zipf.o 00:04:59.864 CC examples/ioat/perf/perf.o 00:04:59.864 TEST_HEADER include/spdk/fsdev_module.h 00:04:59.864 TEST_HEADER include/spdk/ftl.h 00:04:59.864 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:59.864 TEST_HEADER include/spdk/gpt_spec.h 00:04:59.864 CC test/thread/poller_perf/poller_perf.o 00:04:59.864 TEST_HEADER include/spdk/hexlify.h 00:04:59.864 TEST_HEADER include/spdk/histogram_data.h 00:04:59.864 TEST_HEADER include/spdk/idxd.h 00:04:59.864 TEST_HEADER include/spdk/idxd_spec.h 00:04:59.864 TEST_HEADER include/spdk/init.h 00:04:59.864 TEST_HEADER include/spdk/ioat.h 00:04:59.864 TEST_HEADER include/spdk/ioat_spec.h 00:04:59.864 TEST_HEADER include/spdk/iscsi_spec.h 00:04:59.864 TEST_HEADER include/spdk/json.h 00:04:59.864 TEST_HEADER include/spdk/jsonrpc.h 00:04:59.864 TEST_HEADER include/spdk/keyring.h 00:04:59.864 TEST_HEADER include/spdk/keyring_module.h 00:04:59.864 TEST_HEADER include/spdk/likely.h 00:04:59.864 TEST_HEADER include/spdk/log.h 00:04:59.864 TEST_HEADER include/spdk/lvol.h 00:04:59.864 CC test/dma/test_dma/test_dma.o 00:04:59.864 TEST_HEADER include/spdk/md5.h 00:04:59.864 TEST_HEADER include/spdk/memory.h 00:04:59.864 TEST_HEADER include/spdk/mmio.h 00:04:59.864 TEST_HEADER include/spdk/nbd.h 00:04:59.864 TEST_HEADER include/spdk/net.h 00:04:59.864 TEST_HEADER include/spdk/notify.h 00:04:59.864 TEST_HEADER include/spdk/nvme.h 00:04:59.864 TEST_HEADER include/spdk/nvme_intel.h 00:04:59.864 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:59.864 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:59.864 TEST_HEADER include/spdk/nvme_spec.h 00:04:59.864 CC test/app/bdev_svc/bdev_svc.o 00:04:59.864 TEST_HEADER include/spdk/nvme_zns.h 00:04:59.864 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:59.864 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:59.864 TEST_HEADER include/spdk/nvmf.h 00:04:59.864 TEST_HEADER include/spdk/nvmf_spec.h 00:04:59.864 TEST_HEADER include/spdk/nvmf_transport.h 00:04:59.864 TEST_HEADER include/spdk/opal.h 00:04:59.864 TEST_HEADER include/spdk/opal_spec.h 00:04:59.864 TEST_HEADER include/spdk/pci_ids.h 00:04:59.864 TEST_HEADER include/spdk/pipe.h 00:04:59.864 TEST_HEADER include/spdk/queue.h 00:04:59.864 TEST_HEADER include/spdk/reduce.h 00:04:59.864 TEST_HEADER include/spdk/rpc.h 00:04:59.864 TEST_HEADER include/spdk/scheduler.h 00:04:59.864 TEST_HEADER include/spdk/scsi.h 00:05:00.140 TEST_HEADER include/spdk/scsi_spec.h 00:05:00.140 TEST_HEADER include/spdk/sock.h 00:05:00.140 TEST_HEADER include/spdk/stdinc.h 00:05:00.140 TEST_HEADER include/spdk/string.h 00:05:00.140 TEST_HEADER include/spdk/thread.h 00:05:00.140 TEST_HEADER include/spdk/trace.h 00:05:00.140 TEST_HEADER include/spdk/trace_parser.h 00:05:00.140 TEST_HEADER 
include/spdk/tree.h 00:05:00.140 TEST_HEADER include/spdk/ublk.h 00:05:00.140 TEST_HEADER include/spdk/util.h 00:05:00.140 TEST_HEADER include/spdk/uuid.h 00:05:00.140 TEST_HEADER include/spdk/version.h 00:05:00.140 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:00.140 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:00.140 TEST_HEADER include/spdk/vhost.h 00:05:00.140 TEST_HEADER include/spdk/vmd.h 00:05:00.140 TEST_HEADER include/spdk/xor.h 00:05:00.140 TEST_HEADER include/spdk/zipf.h 00:05:00.140 CXX test/cpp_headers/accel.o 00:05:00.140 LINK interrupt_tgt 00:05:00.140 LINK zipf 00:05:00.140 LINK poller_perf 00:05:00.140 LINK spdk_trace_record 00:05:00.140 LINK nvmf_tgt 00:05:00.140 LINK ioat_perf 00:05:00.140 LINK bdev_svc 00:05:00.140 CXX test/cpp_headers/accel_module.o 00:05:00.398 LINK spdk_trace 00:05:00.398 CC app/spdk_lspci/spdk_lspci.o 00:05:00.398 CC app/iscsi_tgt/iscsi_tgt.o 00:05:00.398 CC examples/ioat/verify/verify.o 00:05:00.398 CXX test/cpp_headers/assert.o 00:05:00.398 CC app/spdk_nvme_perf/perf.o 00:05:00.398 CC app/spdk_tgt/spdk_tgt.o 00:05:00.398 LINK test_dma 00:05:00.656 LINK spdk_lspci 00:05:00.656 CC test/env/mem_callbacks/mem_callbacks.o 00:05:00.656 CC app/spdk_nvme_identify/identify.o 00:05:00.656 CXX test/cpp_headers/barrier.o 00:05:00.656 LINK iscsi_tgt 00:05:00.656 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:00.656 LINK verify 00:05:00.656 LINK spdk_tgt 00:05:00.915 CXX test/cpp_headers/base64.o 00:05:00.915 CC test/env/vtophys/vtophys.o 00:05:00.915 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:00.915 CC test/env/memory/memory_ut.o 00:05:00.915 LINK vtophys 00:05:00.915 CXX test/cpp_headers/bdev.o 00:05:00.915 LINK env_dpdk_post_init 00:05:01.173 CC examples/thread/thread/thread_ex.o 00:05:01.173 CC test/env/pci/pci_ut.o 00:05:01.173 LINK nvme_fuzz 00:05:01.173 LINK mem_callbacks 00:05:01.173 CXX test/cpp_headers/bdev_module.o 00:05:01.173 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:01.442 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:01.442 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:01.442 LINK spdk_nvme_perf 00:05:01.442 LINK thread 00:05:01.442 CXX test/cpp_headers/bdev_zone.o 00:05:01.442 LINK spdk_nvme_identify 00:05:01.442 CC examples/sock/hello_world/hello_sock.o 00:05:01.442 LINK pci_ut 00:05:01.700 CXX test/cpp_headers/bit_array.o 00:05:01.700 CC examples/vmd/lsvmd/lsvmd.o 00:05:01.700 CC examples/vmd/led/led.o 00:05:01.700 CC examples/idxd/perf/perf.o 00:05:01.700 CC app/spdk_nvme_discover/discovery_aer.o 00:05:01.700 LINK hello_sock 00:05:01.700 LINK vhost_fuzz 00:05:01.959 CXX test/cpp_headers/bit_pool.o 00:05:01.959 LINK lsvmd 00:05:01.959 LINK led 00:05:01.959 CC test/rpc_client/rpc_client_test.o 00:05:01.959 LINK spdk_nvme_discover 00:05:01.959 CXX test/cpp_headers/blob_bdev.o 00:05:02.217 LINK idxd_perf 00:05:02.217 LINK rpc_client_test 00:05:02.217 CC examples/accel/perf/accel_perf.o 00:05:02.217 CC examples/blob/hello_world/hello_blob.o 00:05:02.217 CC examples/blob/cli/blobcli.o 00:05:02.217 LINK memory_ut 00:05:02.217 CC examples/nvme/hello_world/hello_world.o 00:05:02.217 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.217 CC app/spdk_top/spdk_top.o 00:05:02.217 CXX test/cpp_headers/blobfs.o 00:05:02.217 CXX test/cpp_headers/blob.o 00:05:02.475 CXX test/cpp_headers/conf.o 00:05:02.475 LINK hello_world 00:05:02.475 LINK hello_blob 00:05:02.734 CC test/app/jsoncat/jsoncat.o 00:05:02.734 CC test/app/histogram_perf/histogram_perf.o 00:05:02.734 CXX test/cpp_headers/config.o 00:05:02.734 CXX test/cpp_headers/cpuset.o 
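(A rough sketch of the idea behind the test/cpp_headers pass above, for orientation only: each public SPDK header is compiled on its own so that a header missing its own includes fails the build. The loop below is illustrative and hypothetical, not the actual harness under test/cpp_headers, and the real pass compiles the stubs as C++.)

  # Illustrative only: compile each public header in isolation from the repo root.
  # File names, compiler and flags are assumptions; the real check is the
  # test/cpp_headers target shown in the build output above.
  for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include <spdk/%s.h>\n' "$name" > "/tmp/hdr_${name}.c"
      gcc -c -I include -o "/tmp/hdr_${name}.o" "/tmp/hdr_${name}.c" \
          || echo "header not self-contained: $hdr"
  done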
00:05:02.734 LINK accel_perf 00:05:02.734 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:02.734 CXX test/cpp_headers/crc16.o 00:05:02.734 CC examples/nvme/reconnect/reconnect.o 00:05:02.734 LINK blobcli 00:05:02.734 LINK jsoncat 00:05:02.734 LINK histogram_perf 00:05:02.992 CXX test/cpp_headers/crc32.o 00:05:02.992 CC test/app/stub/stub.o 00:05:02.992 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:02.992 LINK hello_fsdev 00:05:02.992 LINK iscsi_fuzz 00:05:02.992 CC examples/nvme/arbitration/arbitration.o 00:05:02.992 CC examples/nvme/hotplug/hotplug.o 00:05:02.992 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:02.992 CXX test/cpp_headers/crc64.o 00:05:03.250 LINK reconnect 00:05:03.250 LINK spdk_top 00:05:03.250 LINK stub 00:05:03.250 CXX test/cpp_headers/dif.o 00:05:03.250 LINK cmb_copy 00:05:03.250 LINK hotplug 00:05:03.509 CC examples/nvme/abort/abort.o 00:05:03.509 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:03.509 LINK arbitration 00:05:03.509 CXX test/cpp_headers/dma.o 00:05:03.509 LINK nvme_manage 00:05:03.509 CXX test/cpp_headers/endian.o 00:05:03.509 CC app/vhost/vhost.o 00:05:03.509 CC examples/bdev/hello_world/hello_bdev.o 00:05:03.509 LINK pmr_persistence 00:05:03.767 CXX test/cpp_headers/env_dpdk.o 00:05:03.767 CC test/accel/dif/dif.o 00:05:03.767 CC examples/bdev/bdevperf/bdevperf.o 00:05:03.767 CXX test/cpp_headers/env.o 00:05:03.767 LINK vhost 00:05:03.767 CC app/spdk_dd/spdk_dd.o 00:05:03.767 CXX test/cpp_headers/event.o 00:05:03.767 LINK abort 00:05:03.767 CXX test/cpp_headers/fd_group.o 00:05:03.767 LINK hello_bdev 00:05:04.025 CC app/fio/nvme/fio_plugin.o 00:05:04.025 CXX test/cpp_headers/fd.o 00:05:04.283 CC test/blobfs/mkfs/mkfs.o 00:05:04.283 CC app/fio/bdev/fio_plugin.o 00:05:04.283 CC test/event/event_perf/event_perf.o 00:05:04.283 CXX test/cpp_headers/file.o 00:05:04.283 CC test/lvol/esnap/esnap.o 00:05:04.283 LINK spdk_dd 00:05:04.283 CC test/nvme/aer/aer.o 00:05:04.283 LINK event_perf 00:05:04.283 LINK dif 00:05:04.283 LINK mkfs 00:05:04.541 CXX test/cpp_headers/fsdev.o 00:05:04.541 LINK bdevperf 00:05:04.541 LINK spdk_nvme 00:05:04.541 LINK aer 00:05:04.541 CXX test/cpp_headers/fsdev_module.o 00:05:04.541 CC test/nvme/reset/reset.o 00:05:04.798 CC test/event/reactor/reactor.o 00:05:04.798 LINK spdk_bdev 00:05:04.798 CXX test/cpp_headers/ftl.o 00:05:04.798 CC test/nvme/sgl/sgl.o 00:05:04.798 CC test/event/reactor_perf/reactor_perf.o 00:05:04.798 LINK reactor 00:05:05.057 CXX test/cpp_headers/fuse_dispatcher.o 00:05:05.057 CC test/nvme/e2edp/nvme_dp.o 00:05:05.057 LINK reset 00:05:05.057 LINK reactor_perf 00:05:05.057 LINK sgl 00:05:05.057 CC test/nvme/overhead/overhead.o 00:05:05.057 CC examples/nvmf/nvmf/nvmf.o 00:05:05.057 CC test/bdev/bdevio/bdevio.o 00:05:05.057 CC test/event/app_repeat/app_repeat.o 00:05:05.057 CXX test/cpp_headers/gpt_spec.o 00:05:05.315 CXX test/cpp_headers/hexlify.o 00:05:05.315 CXX test/cpp_headers/histogram_data.o 00:05:05.315 LINK app_repeat 00:05:05.315 LINK nvme_dp 00:05:05.315 CC test/nvme/err_injection/err_injection.o 00:05:05.315 LINK overhead 00:05:05.573 CXX test/cpp_headers/idxd.o 00:05:05.573 CXX test/cpp_headers/idxd_spec.o 00:05:05.573 CC test/nvme/startup/startup.o 00:05:05.573 LINK nvmf 00:05:05.573 LINK bdevio 00:05:05.573 LINK err_injection 00:05:05.573 CXX test/cpp_headers/init.o 00:05:05.573 LINK startup 00:05:05.573 CC test/nvme/reserve/reserve.o 00:05:05.834 CC test/event/scheduler/scheduler.o 00:05:05.834 CC test/nvme/simple_copy/simple_copy.o 00:05:05.834 CC test/nvme/connect_stress/connect_stress.o 
00:05:05.834 CXX test/cpp_headers/ioat.o 00:05:05.834 CC test/nvme/boot_partition/boot_partition.o 00:05:05.834 LINK reserve 00:05:05.834 CC test/nvme/fused_ordering/fused_ordering.o 00:05:05.834 CC test/nvme/compliance/nvme_compliance.o 00:05:06.093 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:06.093 LINK simple_copy 00:05:06.093 LINK connect_stress 00:05:06.093 LINK scheduler 00:05:06.093 CXX test/cpp_headers/ioat_spec.o 00:05:06.093 LINK boot_partition 00:05:06.093 LINK fused_ordering 00:05:06.093 LINK doorbell_aers 00:05:06.093 CXX test/cpp_headers/iscsi_spec.o 00:05:06.351 CC test/nvme/fdp/fdp.o 00:05:06.351 CXX test/cpp_headers/json.o 00:05:06.351 CC test/nvme/cuse/cuse.o 00:05:06.351 CXX test/cpp_headers/jsonrpc.o 00:05:06.351 CXX test/cpp_headers/keyring.o 00:05:06.351 LINK nvme_compliance 00:05:06.351 CXX test/cpp_headers/keyring_module.o 00:05:06.351 CXX test/cpp_headers/likely.o 00:05:06.351 CXX test/cpp_headers/log.o 00:05:06.351 CXX test/cpp_headers/lvol.o 00:05:06.351 CXX test/cpp_headers/md5.o 00:05:06.610 CXX test/cpp_headers/memory.o 00:05:06.610 CXX test/cpp_headers/mmio.o 00:05:06.610 CXX test/cpp_headers/nbd.o 00:05:06.610 CXX test/cpp_headers/net.o 00:05:06.610 CXX test/cpp_headers/notify.o 00:05:06.610 CXX test/cpp_headers/nvme.o 00:05:06.610 LINK fdp 00:05:06.610 CXX test/cpp_headers/nvme_intel.o 00:05:06.610 CXX test/cpp_headers/nvme_ocssd.o 00:05:06.610 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:06.610 CXX test/cpp_headers/nvme_spec.o 00:05:06.868 CXX test/cpp_headers/nvme_zns.o 00:05:06.868 CXX test/cpp_headers/nvmf_cmd.o 00:05:06.868 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:06.868 CXX test/cpp_headers/nvmf.o 00:05:06.868 CXX test/cpp_headers/nvmf_spec.o 00:05:06.868 CXX test/cpp_headers/nvmf_transport.o 00:05:06.868 CXX test/cpp_headers/opal.o 00:05:06.868 CXX test/cpp_headers/opal_spec.o 00:05:06.868 CXX test/cpp_headers/pci_ids.o 00:05:07.127 CXX test/cpp_headers/pipe.o 00:05:07.127 CXX test/cpp_headers/queue.o 00:05:07.127 CXX test/cpp_headers/reduce.o 00:05:07.127 CXX test/cpp_headers/rpc.o 00:05:07.127 CXX test/cpp_headers/scheduler.o 00:05:07.127 CXX test/cpp_headers/scsi.o 00:05:07.127 CXX test/cpp_headers/scsi_spec.o 00:05:07.127 CXX test/cpp_headers/sock.o 00:05:07.127 CXX test/cpp_headers/stdinc.o 00:05:07.127 CXX test/cpp_headers/string.o 00:05:07.127 CXX test/cpp_headers/thread.o 00:05:07.127 CXX test/cpp_headers/trace.o 00:05:07.127 CXX test/cpp_headers/trace_parser.o 00:05:07.386 CXX test/cpp_headers/tree.o 00:05:07.386 CXX test/cpp_headers/ublk.o 00:05:07.386 CXX test/cpp_headers/util.o 00:05:07.386 CXX test/cpp_headers/uuid.o 00:05:07.386 CXX test/cpp_headers/version.o 00:05:07.386 CXX test/cpp_headers/vfio_user_pci.o 00:05:07.386 CXX test/cpp_headers/vfio_user_spec.o 00:05:07.386 CXX test/cpp_headers/vhost.o 00:05:07.386 CXX test/cpp_headers/vmd.o 00:05:07.386 CXX test/cpp_headers/xor.o 00:05:07.386 CXX test/cpp_headers/zipf.o 00:05:07.644 LINK cuse 00:05:09.548 LINK esnap 00:05:10.115 00:05:10.115 real 1m26.805s 00:05:10.115 user 7m9.654s 00:05:10.115 sys 1m11.666s 00:05:10.115 10:54:15 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:10.115 ************************************ 00:05:10.115 END TEST make 00:05:10.115 ************************************ 00:05:10.115 10:54:15 make -- common/autotest_common.sh@10 -- $ set +x 00:05:10.115 10:54:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:10.115 10:54:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:10.115 10:54:15 -- pm/common@40 -- $ local monitor 
pid pids signal=TERM 00:05:10.115 10:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.115 10:54:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:10.115 10:54:15 -- pm/common@44 -- $ pid=6044 00:05:10.115 10:54:15 -- pm/common@50 -- $ kill -TERM 6044 00:05:10.115 10:54:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.115 10:54:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:10.115 10:54:15 -- pm/common@44 -- $ pid=6046 00:05:10.115 10:54:15 -- pm/common@50 -- $ kill -TERM 6046 00:05:10.115 10:54:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:10.115 10:54:15 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:10.115 10:54:15 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.115 10:54:15 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.115 10:54:15 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.374 10:54:15 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.374 10:54:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.374 10:54:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.374 10:54:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.374 10:54:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.374 10:54:15 -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.374 10:54:15 -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.374 10:54:15 -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.374 10:54:15 -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.374 10:54:15 -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.374 10:54:15 -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.374 10:54:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.374 10:54:15 -- scripts/common.sh@344 -- # case "$op" in 00:05:10.374 10:54:15 -- scripts/common.sh@345 -- # : 1 00:05:10.374 10:54:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.374 10:54:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.374 10:54:15 -- scripts/common.sh@365 -- # decimal 1 00:05:10.374 10:54:15 -- scripts/common.sh@353 -- # local d=1 00:05:10.374 10:54:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.374 10:54:15 -- scripts/common.sh@355 -- # echo 1 00:05:10.374 10:54:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.374 10:54:15 -- scripts/common.sh@366 -- # decimal 2 00:05:10.374 10:54:15 -- scripts/common.sh@353 -- # local d=2 00:05:10.374 10:54:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.374 10:54:15 -- scripts/common.sh@355 -- # echo 2 00:05:10.374 10:54:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.374 10:54:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.374 10:54:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.374 10:54:15 -- scripts/common.sh@368 -- # return 0 00:05:10.374 10:54:15 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.374 10:54:15 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.374 --rc genhtml_branch_coverage=1 00:05:10.374 --rc genhtml_function_coverage=1 00:05:10.374 --rc genhtml_legend=1 00:05:10.374 --rc geninfo_all_blocks=1 00:05:10.374 --rc geninfo_unexecuted_blocks=1 00:05:10.374 00:05:10.374 ' 00:05:10.374 10:54:15 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.374 --rc genhtml_branch_coverage=1 00:05:10.374 --rc genhtml_function_coverage=1 00:05:10.374 --rc genhtml_legend=1 00:05:10.374 --rc geninfo_all_blocks=1 00:05:10.374 --rc geninfo_unexecuted_blocks=1 00:05:10.374 00:05:10.374 ' 00:05:10.374 10:54:15 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.374 --rc genhtml_branch_coverage=1 00:05:10.374 --rc genhtml_function_coverage=1 00:05:10.374 --rc genhtml_legend=1 00:05:10.374 --rc geninfo_all_blocks=1 00:05:10.374 --rc geninfo_unexecuted_blocks=1 00:05:10.374 00:05:10.374 ' 00:05:10.374 10:54:15 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.374 --rc genhtml_branch_coverage=1 00:05:10.374 --rc genhtml_function_coverage=1 00:05:10.374 --rc genhtml_legend=1 00:05:10.374 --rc geninfo_all_blocks=1 00:05:10.374 --rc geninfo_unexecuted_blocks=1 00:05:10.375 00:05:10.375 ' 00:05:10.375 10:54:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.375 10:54:15 -- nvmf/common.sh@7 -- # uname -s 00:05:10.375 10:54:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.375 10:54:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.375 10:54:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.375 10:54:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.375 10:54:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.375 10:54:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.375 10:54:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.375 10:54:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.375 10:54:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.375 10:54:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.375 10:54:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:05:10.375 
10:54:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:05:10.375 10:54:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.375 10:54:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.375 10:54:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:10.375 10:54:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.375 10:54:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.375 10:54:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.375 10:54:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.375 10:54:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.375 10:54:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.375 10:54:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.375 10:54:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.375 10:54:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.375 10:54:15 -- paths/export.sh@5 -- # export PATH 00:05:10.375 10:54:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.375 10:54:15 -- nvmf/common.sh@51 -- # : 0 00:05:10.375 10:54:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.375 10:54:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.375 10:54:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.375 10:54:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.375 10:54:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.375 10:54:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.375 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.375 10:54:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.375 10:54:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.375 10:54:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.375 10:54:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:10.375 10:54:15 -- spdk/autotest.sh@32 -- # uname -s 00:05:10.375 10:54:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:10.375 10:54:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:10.375 10:54:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:10.375 10:54:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:10.375 10:54:15 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:10.375 10:54:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:10.375 10:54:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:10.375 10:54:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:10.375 10:54:15 -- spdk/autotest.sh@48 -- # udevadm_pid=67575 00:05:10.375 10:54:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:10.375 10:54:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:10.375 10:54:15 -- pm/common@17 -- # local monitor 00:05:10.375 10:54:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.375 10:54:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:10.375 10:54:15 -- pm/common@25 -- # sleep 1 00:05:10.375 10:54:15 -- pm/common@21 -- # date +%s 00:05:10.375 10:54:15 -- pm/common@21 -- # date +%s 00:05:10.375 10:54:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730199255 00:05:10.375 10:54:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730199255 00:05:10.375 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730199255_collect-vmstat.pm.log 00:05:10.375 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730199255_collect-cpu-load.pm.log 00:05:11.311 10:54:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:11.311 10:54:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:11.311 10:54:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.311 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.311 10:54:16 -- spdk/autotest.sh@59 -- # create_test_list 00:05:11.311 10:54:16 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:11.311 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.570 10:54:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:11.570 10:54:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:11.570 10:54:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:11.570 10:54:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:11.570 10:54:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:11.570 10:54:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:11.570 10:54:16 -- common/autotest_common.sh@1455 -- # uname 00:05:11.570 10:54:16 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:11.570 10:54:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:11.570 10:54:16 -- common/autotest_common.sh@1475 -- # uname 00:05:11.570 10:54:16 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:11.570 10:54:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:11.570 10:54:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:11.570 lcov: LCOV version 1.15 00:05:11.570 10:54:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:29.655 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:29.655 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:44.534 10:54:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:44.534 10:54:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.534 10:54:49 -- common/autotest_common.sh@10 -- # set +x 00:05:44.534 10:54:49 -- spdk/autotest.sh@78 -- # rm -f 00:05:44.534 10:54:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.796 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:44.796 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:44.796 10:54:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:44.796 10:54:50 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:44.796 10:54:50 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:44.796 10:54:50 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:44.796 10:54:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:44.796 10:54:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:44.796 10:54:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:44.796 10:54:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:44.796 10:54:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:44.796 10:54:50 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:44.796 10:54:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:44.796 10:54:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:44.796 10:54:50 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:44.796 10:54:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:44.796 10:54:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:44.796 10:54:50 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:44.796 10:54:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:44.796 10:54:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:44.796 10:54:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:44.796 10:54:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:44.796 10:54:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:44.796 10:54:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:44.796 10:54:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:44.796 10:54:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:44.796 No valid GPT data, bailing 
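(The trace around this point is autotest's guard before reusing the VM's NVMe namespaces: a namespace is only zero-filled if it is not zoned and carries no partition table. A condensed, hypothetical rendering of that guard follows; it mirrors the commands seen in the trace but is not the literal autotest code, and the wipe size is illustrative.)

  # Condensed sketch of the wipe guard traced here (not the literal script).
  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      name=$(basename "$dev")
      # skip zoned namespaces
      [[ "$(cat "/sys/block/$name/queue/zoned" 2>/dev/null)" != "none" ]] && continue
      # skip devices that already carry a partition table
      [[ -n "$(blkid -s PTTYPE -o value "$dev")" ]] && continue
      # otherwise wipe only the first MiB, as in the dd output above
      dd if=/dev/zero of="$dev" bs=1M count=1
  done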
00:05:44.796 10:54:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:44.796 10:54:50 -- scripts/common.sh@394 -- # pt= 00:05:44.796 10:54:50 -- scripts/common.sh@395 -- # return 1 00:05:44.796 10:54:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:44.796 1+0 records in 00:05:44.796 1+0 records out 00:05:44.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00381834 s, 275 MB/s 00:05:44.796 10:54:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:44.796 10:54:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:44.796 10:54:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:44.796 10:54:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:44.796 10:54:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:45.055 No valid GPT data, bailing 00:05:45.055 10:54:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:45.055 10:54:50 -- scripts/common.sh@394 -- # pt= 00:05:45.055 10:54:50 -- scripts/common.sh@395 -- # return 1 00:05:45.055 10:54:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:45.055 1+0 records in 00:05:45.055 1+0 records out 00:05:45.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465505 s, 225 MB/s 00:05:45.055 10:54:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:45.055 10:54:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:45.055 10:54:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:45.055 10:54:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:45.055 10:54:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:45.055 No valid GPT data, bailing 00:05:45.055 10:54:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:45.055 10:54:50 -- scripts/common.sh@394 -- # pt= 00:05:45.055 10:54:50 -- scripts/common.sh@395 -- # return 1 00:05:45.055 10:54:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:45.055 1+0 records in 00:05:45.055 1+0 records out 00:05:45.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376127 s, 279 MB/s 00:05:45.055 10:54:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:45.055 10:54:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:45.055 10:54:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:45.055 10:54:50 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:45.055 10:54:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:45.055 No valid GPT data, bailing 00:05:45.055 10:54:50 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:45.055 10:54:50 -- scripts/common.sh@394 -- # pt= 00:05:45.055 10:54:50 -- scripts/common.sh@395 -- # return 1 00:05:45.055 10:54:50 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:45.055 1+0 records in 00:05:45.055 1+0 records out 00:05:45.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443314 s, 237 MB/s 00:05:45.055 10:54:50 -- spdk/autotest.sh@105 -- # sync 00:05:45.055 10:54:50 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:45.055 10:54:50 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:45.055 10:54:50 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:47.590 10:54:52 -- spdk/autotest.sh@111 -- # uname -s 00:05:47.590 10:54:52 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:05:47.590 10:54:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:47.590 10:54:52 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:47.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.849 Hugepages 00:05:47.849 node hugesize free / total 00:05:47.849 node0 1048576kB 0 / 0 00:05:47.849 node0 2048kB 0 / 0 00:05:47.849 00:05:47.849 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:47.849 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:48.107 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:48.107 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:48.107 10:54:53 -- spdk/autotest.sh@117 -- # uname -s 00:05:48.107 10:54:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:48.107 10:54:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:48.107 10:54:53 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:48.933 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.933 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:48.933 10:54:54 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:49.869 10:54:55 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:49.869 10:54:55 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:49.869 10:54:55 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:49.869 10:54:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:49.869 10:54:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:49.869 10:54:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:49.869 10:54:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:49.869 10:54:55 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.869 10:54:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:50.127 10:54:55 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:50.127 10:54:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:50.127 10:54:55 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:50.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.386 Waiting for block devices as requested 00:05:50.386 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:50.644 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:50.644 10:54:55 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:50.644 10:54:55 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:50.644 10:54:55 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:50.644 10:54:55 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:50.644 10:54:55 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:50.644 10:54:55 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:50.644 10:54:55 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:50.644 10:54:55 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:50.644 10:54:55 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:50.644 10:54:55 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:50.644 10:54:55 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:50.644 10:54:55 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:50.644 10:54:55 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:50.644 10:54:56 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:50.644 10:54:56 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:50.644 10:54:56 -- common/autotest_common.sh@1541 -- # continue 00:05:50.644 10:54:56 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:50.644 10:54:56 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:50.644 10:54:56 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:50.644 10:54:56 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:50.644 10:54:56 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:50.644 10:54:56 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:50.644 10:54:56 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:50.644 10:54:56 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:50.644 10:54:56 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:50.644 10:54:56 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:50.644 10:54:56 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:50.644 10:54:56 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:50.644 10:54:56 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:50.644 10:54:56 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:50.644 10:54:56 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:50.644 10:54:56 -- common/autotest_common.sh@1541 -- # continue 00:05:50.644 10:54:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:50.644 10:54:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.644 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:05:50.644 10:54:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:50.644 10:54:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.644 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:05:50.644 10:54:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:51.580 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.580 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.580 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.580 10:54:56 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:51.580 10:54:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.580 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.580 10:54:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:51.580 10:54:56 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:51.580 10:54:56 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:51.580 10:54:56 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:51.580 10:54:56 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:51.580 10:54:56 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:51.580 10:54:56 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:51.580 10:54:56 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:51.580 10:54:56 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:51.580 10:54:56 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:51.580 10:54:56 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.580 10:54:56 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:51.580 10:54:56 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:51.580 10:54:57 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:51.580 10:54:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:51.580 10:54:57 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:51.580 10:54:57 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:51.580 10:54:57 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:51.580 10:54:57 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:51.580 10:54:57 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:51.580 10:54:57 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:51.580 10:54:57 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:51.580 10:54:57 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:51.580 10:54:57 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:51.580 10:54:57 -- common/autotest_common.sh@1570 -- # return 0 00:05:51.580 10:54:57 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:51.580 10:54:57 -- common/autotest_common.sh@1578 -- # return 0 00:05:51.580 10:54:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:51.580 10:54:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:51.580 10:54:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.580 10:54:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:51.580 10:54:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:51.580 10:54:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:51.580 10:54:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.580 10:54:57 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:51.580 10:54:57 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:51.580 10:54:57 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:51.580 10:54:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.580 10:54:57 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.580 10:54:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.580 10:54:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.580 ************************************ 00:05:51.580 START TEST env 00:05:51.580 ************************************ 00:05:51.580 10:54:57 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.840 * Looking for test storage... 00:05:51.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.840 10:54:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.840 10:54:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.840 10:54:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.840 10:54:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.840 10:54:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.840 10:54:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.840 10:54:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.840 10:54:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.840 10:54:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.840 10:54:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.840 10:54:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.840 10:54:57 env -- scripts/common.sh@344 -- # case "$op" in 00:05:51.840 10:54:57 env -- scripts/common.sh@345 -- # : 1 00:05:51.840 10:54:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.840 10:54:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.840 10:54:57 env -- scripts/common.sh@365 -- # decimal 1 00:05:51.840 10:54:57 env -- scripts/common.sh@353 -- # local d=1 00:05:51.840 10:54:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.840 10:54:57 env -- scripts/common.sh@355 -- # echo 1 00:05:51.840 10:54:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.840 10:54:57 env -- scripts/common.sh@366 -- # decimal 2 00:05:51.840 10:54:57 env -- scripts/common.sh@353 -- # local d=2 00:05:51.840 10:54:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.840 10:54:57 env -- scripts/common.sh@355 -- # echo 2 00:05:51.840 10:54:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.840 10:54:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.840 10:54:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.840 10:54:57 env -- scripts/common.sh@368 -- # return 0 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.840 --rc genhtml_branch_coverage=1 00:05:51.840 --rc genhtml_function_coverage=1 00:05:51.840 --rc genhtml_legend=1 00:05:51.840 --rc geninfo_all_blocks=1 00:05:51.840 --rc geninfo_unexecuted_blocks=1 00:05:51.840 00:05:51.840 ' 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.840 --rc genhtml_branch_coverage=1 00:05:51.840 --rc genhtml_function_coverage=1 00:05:51.840 --rc genhtml_legend=1 00:05:51.840 --rc geninfo_all_blocks=1 00:05:51.840 --rc geninfo_unexecuted_blocks=1 00:05:51.840 00:05:51.840 ' 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.840 --rc genhtml_branch_coverage=1 00:05:51.840 --rc genhtml_function_coverage=1 00:05:51.840 --rc genhtml_legend=1 00:05:51.840 --rc geninfo_all_blocks=1 00:05:51.840 --rc geninfo_unexecuted_blocks=1 00:05:51.840 00:05:51.840 ' 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:51.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.840 --rc genhtml_branch_coverage=1 00:05:51.840 --rc genhtml_function_coverage=1 00:05:51.840 --rc genhtml_legend=1 00:05:51.840 --rc geninfo_all_blocks=1 00:05:51.840 --rc geninfo_unexecuted_blocks=1 00:05:51.840 00:05:51.840 ' 00:05:51.840 10:54:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.840 10:54:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.840 10:54:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.840 ************************************ 00:05:51.840 START TEST env_memory 00:05:51.840 ************************************ 00:05:51.840 10:54:57 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.840 00:05:51.840 00:05:51.840 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.840 http://cunit.sourceforge.net/ 00:05:51.840 00:05:51.840 00:05:51.840 Suite: memory 00:05:52.100 Test: alloc and free memory map ...[2024-10-29 10:54:57.363999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:52.100 passed 00:05:52.100 Test: mem map translation ...[2024-10-29 10:54:57.394905] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:52.100 [2024-10-29 10:54:57.394941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:52.100 [2024-10-29 10:54:57.394996] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:52.100 [2024-10-29 10:54:57.395006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:52.100 passed 00:05:52.100 Test: mem map registration ...[2024-10-29 10:54:57.458582] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:52.100 [2024-10-29 10:54:57.458618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:52.100 passed 00:05:52.100 Test: mem map adjacent registrations ...passed 00:05:52.100 00:05:52.100 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.100 suites 1 1 n/a 0 0 00:05:52.100 tests 4 4 4 0 0 00:05:52.100 asserts 152 152 152 0 n/a 00:05:52.100 00:05:52.100 Elapsed time = 0.213 seconds 00:05:52.100 00:05:52.100 real 0m0.231s 00:05:52.100 user 0m0.215s 00:05:52.100 sys 0m0.011s 00:05:52.100 10:54:57 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.100 ************************************ 00:05:52.100 END TEST env_memory 00:05:52.100 ************************************ 00:05:52.100 10:54:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:52.100 10:54:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:52.100 10:54:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:52.100 10:54:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.100 10:54:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.100 ************************************ 00:05:52.100 START TEST env_vtophys 00:05:52.100 ************************************ 00:05:52.100 10:54:57 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:52.359 EAL: lib.eal log level changed from notice to debug 00:05:52.359 EAL: Detected lcore 0 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 1 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 2 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 3 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 4 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 5 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 6 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 7 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 8 as core 0 on socket 0 00:05:52.359 EAL: Detected lcore 9 as core 0 on socket 0 00:05:52.359 EAL: Maximum logical cores by configuration: 128 00:05:52.359 EAL: Detected CPU lcores: 10 00:05:52.359 EAL: Detected NUMA nodes: 1 00:05:52.359 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:52.359 EAL: Detected shared linkage of DPDK 00:05:52.359 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:52.359 EAL: Registered [vdev] bus. 00:05:52.359 EAL: bus.vdev log level changed from disabled to notice 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:52.359 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:52.359 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:52.359 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:52.359 EAL: No shared files mode enabled, IPC will be disabled 00:05:52.359 EAL: No shared files mode enabled, IPC is disabled 00:05:52.359 EAL: Selected IOVA mode 'PA' 00:05:52.359 EAL: Probing VFIO support... 00:05:52.359 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:52.359 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:52.360 EAL: Ask a virtual area of 0x2e000 bytes 00:05:52.360 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:52.360 EAL: Setting up physically contiguous memory... 00:05:52.360 EAL: Setting maximum number of open files to 524288 00:05:52.360 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:52.360 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:52.360 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.360 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:52.360 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.360 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.360 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:52.360 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:52.360 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.360 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:52.360 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.360 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.360 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:52.360 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:52.360 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.360 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:52.360 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.360 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.360 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:52.360 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:52.360 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.360 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:52.360 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.360 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.360 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:52.360 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:52.360 EAL: Hugepages will be freed exactly as allocated. 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: TSC frequency is ~2200000 KHz 00:05:52.360 EAL: Main lcore 0 is ready (tid=7f9ae6876a00;cpuset=[0]) 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 0 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 2MB 00:05:52.360 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:52.360 EAL: Mem event callback 'spdk:(nil)' registered 00:05:52.360 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:52.360 00:05:52.360 00:05:52.360 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.360 http://cunit.sourceforge.net/ 00:05:52.360 00:05:52.360 00:05:52.360 Suite: components_suite 00:05:52.360 Test: vtophys_malloc_test ...passed 00:05:52.360 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 4MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 4MB 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 6MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 6MB 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 10MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 10MB 00:05:52.360 EAL: Trying to obtain current memory policy. 
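Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair in the vtophys_spdk_malloc_test output above (the sequence continues below up to 1026MB) is one allocate/free cycle of a progressively larger buffer, with DPDK invoking the 'spdk:(nil)' mem event callback every time the heap grows or shrinks. A minimal way to rerun this suite outside the harness, as a sketch: the binary path is taken from this log, while the hugepage-reservation step and the HUGEMEM knob are assumptions about the local environment.

  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # reserve 2 MB hugepages (assumed knob)
  grep Huge /proc/meminfo                                           # confirm HugePages_Free is non-zero
  /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys             # runs vtophys_malloc_test and vtophys_spdk_malloc_test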
00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 18MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 18MB 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 34MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 34MB 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 66MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 66MB 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.360 EAL: Restoring previous memory policy: 4 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was expanded by 130MB 00:05:52.360 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.360 EAL: request: mp_malloc_sync 00:05:52.360 EAL: No shared files mode enabled, IPC is disabled 00:05:52.360 EAL: Heap on socket 0 was shrunk by 130MB 00:05:52.360 EAL: Trying to obtain current memory policy. 00:05:52.360 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.619 EAL: Restoring previous memory policy: 4 00:05:52.619 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.619 EAL: request: mp_malloc_sync 00:05:52.619 EAL: No shared files mode enabled, IPC is disabled 00:05:52.619 EAL: Heap on socket 0 was expanded by 258MB 00:05:52.619 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.619 EAL: request: mp_malloc_sync 00:05:52.619 EAL: No shared files mode enabled, IPC is disabled 00:05:52.619 EAL: Heap on socket 0 was shrunk by 258MB 00:05:52.619 EAL: Trying to obtain current memory policy. 
00:05:52.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.619 EAL: Restoring previous memory policy: 4 00:05:52.619 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.619 EAL: request: mp_malloc_sync 00:05:52.619 EAL: No shared files mode enabled, IPC is disabled 00:05:52.619 EAL: Heap on socket 0 was expanded by 514MB 00:05:52.619 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.619 EAL: request: mp_malloc_sync 00:05:52.619 EAL: No shared files mode enabled, IPC is disabled 00:05:52.619 EAL: Heap on socket 0 was shrunk by 514MB 00:05:52.619 EAL: Trying to obtain current memory policy. 00:05:52.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.878 EAL: Restoring previous memory policy: 4 00:05:52.878 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.878 EAL: request: mp_malloc_sync 00:05:52.878 EAL: No shared files mode enabled, IPC is disabled 00:05:52.878 EAL: Heap on socket 0 was expanded by 1026MB 00:05:52.878 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.136 passed 00:05:53.136 00:05:53.136 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.136 suites 1 1 n/a 0 0 00:05:53.136 tests 2 2 2 0 0 00:05:53.136 asserts 5309 5309 5309 0 n/a 00:05:53.136 00:05:53.136 Elapsed time = 0.663 seconds 00:05:53.136 EAL: request: mp_malloc_sync 00:05:53.136 EAL: No shared files mode enabled, IPC is disabled 00:05:53.136 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:53.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.136 EAL: request: mp_malloc_sync 00:05:53.136 EAL: No shared files mode enabled, IPC is disabled 00:05:53.136 EAL: Heap on socket 0 was shrunk by 2MB 00:05:53.136 EAL: No shared files mode enabled, IPC is disabled 00:05:53.136 EAL: No shared files mode enabled, IPC is disabled 00:05:53.136 EAL: No shared files mode enabled, IPC is disabled 00:05:53.136 00:05:53.136 real 0m0.871s 00:05:53.136 user 0m0.440s 00:05:53.136 sys 0m0.301s 00:05:53.136 10:54:58 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.136 ************************************ 00:05:53.136 END TEST env_vtophys 00:05:53.136 ************************************ 00:05:53.136 10:54:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:53.136 10:54:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.136 10:54:58 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.136 10:54:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.136 10:54:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.136 ************************************ 00:05:53.136 START TEST env_pci 00:05:53.136 ************************************ 00:05:53.136 10:54:58 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.136 00:05:53.136 00:05:53.136 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.136 http://cunit.sourceforge.net/ 00:05:53.136 00:05:53.136 00:05:53.136 Suite: pci 00:05:53.136 Test: pci_hook ...[2024-10-29 10:54:58.530652] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69836 has claimed it 00:05:53.136 passed 00:05:53.136 00:05:53.136 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.136 suites 1 1 n/a 0 0 00:05:53.136 tests 1 1 1 0 0 00:05:53.136 asserts 25 25 25 0 n/a 00:05:53.136 00:05:53.136 Elapsed time = 0.002EAL: Cannot find device (10000:00:01.0) 
00:05:53.136 EAL: Failed to attach device on primary process 00:05:53.136 seconds 00:05:53.136 00:05:53.136 real 0m0.019s 00:05:53.136 user 0m0.010s 00:05:53.136 sys 0m0.009s 00:05:53.136 10:54:58 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.136 ************************************ 00:05:53.136 10:54:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:53.136 END TEST env_pci 00:05:53.136 ************************************ 00:05:53.136 10:54:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:53.136 10:54:58 env -- env/env.sh@15 -- # uname 00:05:53.136 10:54:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:53.136 10:54:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:53.136 10:54:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.136 10:54:58 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:53.136 10:54:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.136 10:54:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.136 ************************************ 00:05:53.136 START TEST env_dpdk_post_init 00:05:53.136 ************************************ 00:05:53.136 10:54:58 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.136 EAL: Detected CPU lcores: 10 00:05:53.136 EAL: Detected NUMA nodes: 1 00:05:53.136 EAL: Detected shared linkage of DPDK 00:05:53.136 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.136 EAL: Selected IOVA mode 'PA' 00:05:53.396 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.396 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:53.396 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:53.396 Starting DPDK initialization... 00:05:53.396 Starting SPDK post initialization... 00:05:53.396 SPDK NVMe probe 00:05:53.396 Attaching to 0000:00:10.0 00:05:53.396 Attaching to 0000:00:11.0 00:05:53.396 Attached to 0000:00:10.0 00:05:53.396 Attached to 0000:00:11.0 00:05:53.396 Cleaning up... 
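The env_dpdk_post_init run above only reaches "Attached to 0000:00:10.0/11.0" because both emulated NVMe controllers (1b36:0010) were already bound to a userspace-capable driver before EAL probing; the harness's setup script handles that binding. A hedged sketch of the same preparation and invocation follows: the test arguments are copied from the run above, and the setup.sh usage is an assumption about the local environment.

  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh       # bind the NVMe controllers to a userspace driver
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh status     # check that 0000:00:10.0 and 0000:00:11.0 show an SPDK-usable driver
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000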
00:05:53.396 00:05:53.396 real 0m0.190s 00:05:53.396 user 0m0.053s 00:05:53.396 sys 0m0.036s 00:05:53.396 ************************************ 00:05:53.396 END TEST env_dpdk_post_init 00:05:53.396 ************************************ 00:05:53.396 10:54:58 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.396 10:54:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.396 10:54:58 env -- env/env.sh@26 -- # uname 00:05:53.396 10:54:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:53.396 10:54:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.396 10:54:58 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.396 10:54:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.396 10:54:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.396 ************************************ 00:05:53.396 START TEST env_mem_callbacks 00:05:53.396 ************************************ 00:05:53.396 10:54:58 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.396 EAL: Detected CPU lcores: 10 00:05:53.396 EAL: Detected NUMA nodes: 1 00:05:53.396 EAL: Detected shared linkage of DPDK 00:05:53.396 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.396 EAL: Selected IOVA mode 'PA' 00:05:53.655 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.655 00:05:53.655 00:05:53.655 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.655 http://cunit.sourceforge.net/ 00:05:53.655 00:05:53.655 00:05:53.655 Suite: memory 00:05:53.655 Test: test ... 00:05:53.655 register 0x200000200000 2097152 00:05:53.656 malloc 3145728 00:05:53.656 register 0x200000400000 4194304 00:05:53.656 buf 0x200000500000 len 3145728 PASSED 00:05:53.656 malloc 64 00:05:53.656 buf 0x2000004fff40 len 64 PASSED 00:05:53.656 malloc 4194304 00:05:53.656 register 0x200000800000 6291456 00:05:53.656 buf 0x200000a00000 len 4194304 PASSED 00:05:53.656 free 0x200000500000 3145728 00:05:53.656 free 0x2000004fff40 64 00:05:53.656 unregister 0x200000400000 4194304 PASSED 00:05:53.656 free 0x200000a00000 4194304 00:05:53.656 unregister 0x200000800000 6291456 PASSED 00:05:53.656 malloc 8388608 00:05:53.656 register 0x200000400000 10485760 00:05:53.656 buf 0x200000600000 len 8388608 PASSED 00:05:53.656 free 0x200000600000 8388608 00:05:53.656 unregister 0x200000400000 10485760 PASSED 00:05:53.656 passed 00:05:53.656 00:05:53.656 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.656 suites 1 1 n/a 0 0 00:05:53.656 tests 1 1 1 0 0 00:05:53.656 asserts 15 15 15 0 n/a 00:05:53.656 00:05:53.656 Elapsed time = 0.006 seconds 00:05:53.656 00:05:53.656 real 0m0.140s 00:05:53.656 user 0m0.019s 00:05:53.656 sys 0m0.020s 00:05:53.656 10:54:58 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.656 10:54:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:53.656 ************************************ 00:05:53.656 END TEST env_mem_callbacks 00:05:53.656 ************************************ 00:05:53.656 00:05:53.656 real 0m1.954s 00:05:53.656 user 0m0.987s 00:05:53.656 sys 0m0.621s 00:05:53.656 10:54:59 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.656 ************************************ 00:05:53.656 END TEST env 00:05:53.656 ************************************ 00:05:53.656 10:54:59 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.656 10:54:59 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.656 10:54:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:53.656 10:54:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.656 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:05:53.656 ************************************ 00:05:53.656 START TEST rpc 00:05:53.656 ************************************ 00:05:53.656 10:54:59 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.656 * Looking for test storage... 00:05:53.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.656 10:54:59 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.656 10:54:59 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.656 10:54:59 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.915 10:54:59 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.915 10:54:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.915 10:54:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.916 10:54:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.916 10:54:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.916 10:54:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.916 10:54:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.916 10:54:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.916 10:54:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.916 10:54:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.916 10:54:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.916 10:54:59 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.916 10:54:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.916 10:54:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.916 10:54:59 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.916 10:54:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.916 10:54:59 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.916 10:54:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.916 10:54:59 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.916 10:54:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.916 10:54:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.916 10:54:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.916 10:54:59 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.916 --rc genhtml_branch_coverage=1 00:05:53.916 --rc genhtml_function_coverage=1 00:05:53.916 --rc genhtml_legend=1 00:05:53.916 --rc geninfo_all_blocks=1 00:05:53.916 --rc geninfo_unexecuted_blocks=1 00:05:53.916 00:05:53.916 ' 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.916 --rc genhtml_branch_coverage=1 00:05:53.916 --rc genhtml_function_coverage=1 00:05:53.916 --rc genhtml_legend=1 00:05:53.916 --rc geninfo_all_blocks=1 00:05:53.916 --rc geninfo_unexecuted_blocks=1 00:05:53.916 00:05:53.916 ' 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.916 --rc genhtml_branch_coverage=1 00:05:53.916 --rc genhtml_function_coverage=1 00:05:53.916 --rc genhtml_legend=1 00:05:53.916 --rc geninfo_all_blocks=1 00:05:53.916 --rc geninfo_unexecuted_blocks=1 00:05:53.916 00:05:53.916 ' 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.916 --rc genhtml_branch_coverage=1 00:05:53.916 --rc genhtml_function_coverage=1 00:05:53.916 --rc genhtml_legend=1 00:05:53.916 --rc geninfo_all_blocks=1 00:05:53.916 --rc geninfo_unexecuted_blocks=1 00:05:53.916 00:05:53.916 ' 00:05:53.916 10:54:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69959 00:05:53.916 10:54:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:53.916 10:54:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.916 10:54:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69959 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@833 -- # '[' -z 69959 ']' 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
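At this point the rpc suite launches a standalone target with the bdev tracepoint group enabled and waitforlisten blocks until the JSON-RPC server answers on /var/tmp/spdk.sock; every rpc_cmd that follows is a thin wrapper around scripts/rpc.py talking to that socket. An equivalent standalone sequence is sketched below, assuming the default socket path; the polling loop is illustrative, not the harness's exact implementation.

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  # poll the JSON-RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
  echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"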
00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.916 10:54:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.916 [2024-10-29 10:54:59.331448] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:05:53.916 [2024-10-29 10:54:59.331561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69959 ] 00:05:54.177 [2024-10-29 10:54:59.478280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.177 [2024-10-29 10:54:59.496996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:54.177 [2024-10-29 10:54:59.497067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69959' to capture a snapshot of events at runtime. 00:05:54.177 [2024-10-29 10:54:59.497092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.177 [2024-10-29 10:54:59.497099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.177 [2024-10-29 10:54:59.497104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69959 for offline analysis/debug. 00:05:54.177 [2024-10-29 10:54:59.497420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.177 [2024-10-29 10:54:59.531155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.177 10:54:59 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:54.177 10:54:59 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:54.177 10:54:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.177 10:54:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.177 10:54:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.177 10:54:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.177 10:54:59 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.177 10:54:59 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.177 10:54:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.177 ************************************ 00:05:54.177 START TEST rpc_integrity 00:05:54.177 ************************************ 00:05:54.177 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:54.177 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.177 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.177 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.177 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.177 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.443 { 00:05:54.443 "name": "Malloc0", 00:05:54.443 "aliases": [ 00:05:54.443 "9de32842-ed5a-4318-a1d3-0760607d2832" 00:05:54.443 ], 00:05:54.443 "product_name": "Malloc disk", 00:05:54.443 "block_size": 512, 00:05:54.443 "num_blocks": 16384, 00:05:54.443 "uuid": "9de32842-ed5a-4318-a1d3-0760607d2832", 00:05:54.443 "assigned_rate_limits": { 00:05:54.443 "rw_ios_per_sec": 0, 00:05:54.443 "rw_mbytes_per_sec": 0, 00:05:54.443 "r_mbytes_per_sec": 0, 00:05:54.443 "w_mbytes_per_sec": 0 00:05:54.443 }, 00:05:54.443 "claimed": false, 00:05:54.443 "zoned": false, 00:05:54.443 "supported_io_types": { 00:05:54.443 "read": true, 00:05:54.443 "write": true, 00:05:54.443 "unmap": true, 00:05:54.443 "flush": true, 00:05:54.443 "reset": true, 00:05:54.443 "nvme_admin": false, 00:05:54.443 "nvme_io": false, 00:05:54.443 "nvme_io_md": false, 00:05:54.443 "write_zeroes": true, 00:05:54.443 "zcopy": true, 00:05:54.443 "get_zone_info": false, 00:05:54.443 "zone_management": false, 00:05:54.443 "zone_append": false, 00:05:54.443 "compare": false, 00:05:54.443 "compare_and_write": false, 00:05:54.443 "abort": true, 00:05:54.443 "seek_hole": false, 00:05:54.443 "seek_data": false, 00:05:54.443 "copy": true, 00:05:54.443 "nvme_iov_md": false 00:05:54.443 }, 00:05:54.443 "memory_domains": [ 00:05:54.443 { 00:05:54.443 "dma_device_id": "system", 00:05:54.443 "dma_device_type": 1 00:05:54.443 }, 00:05:54.443 { 00:05:54.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.443 "dma_device_type": 2 00:05:54.443 } 00:05:54.443 ], 00:05:54.443 "driver_specific": {} 00:05:54.443 } 00:05:54.443 ]' 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.443 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.443 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.443 [2024-10-29 10:54:59.818740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.443 [2024-10-29 10:54:59.818844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.443 [2024-10-29 10:54:59.818860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a0030 00:05:54.443 [2024-10-29 10:54:59.818884] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.443 [2024-10-29 10:54:59.820461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.444 [2024-10-29 10:54:59.820512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:05:54.444 Passthru0 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.444 { 00:05:54.444 "name": "Malloc0", 00:05:54.444 "aliases": [ 00:05:54.444 "9de32842-ed5a-4318-a1d3-0760607d2832" 00:05:54.444 ], 00:05:54.444 "product_name": "Malloc disk", 00:05:54.444 "block_size": 512, 00:05:54.444 "num_blocks": 16384, 00:05:54.444 "uuid": "9de32842-ed5a-4318-a1d3-0760607d2832", 00:05:54.444 "assigned_rate_limits": { 00:05:54.444 "rw_ios_per_sec": 0, 00:05:54.444 "rw_mbytes_per_sec": 0, 00:05:54.444 "r_mbytes_per_sec": 0, 00:05:54.444 "w_mbytes_per_sec": 0 00:05:54.444 }, 00:05:54.444 "claimed": true, 00:05:54.444 "claim_type": "exclusive_write", 00:05:54.444 "zoned": false, 00:05:54.444 "supported_io_types": { 00:05:54.444 "read": true, 00:05:54.444 "write": true, 00:05:54.444 "unmap": true, 00:05:54.444 "flush": true, 00:05:54.444 "reset": true, 00:05:54.444 "nvme_admin": false, 00:05:54.444 "nvme_io": false, 00:05:54.444 "nvme_io_md": false, 00:05:54.444 "write_zeroes": true, 00:05:54.444 "zcopy": true, 00:05:54.444 "get_zone_info": false, 00:05:54.444 "zone_management": false, 00:05:54.444 "zone_append": false, 00:05:54.444 "compare": false, 00:05:54.444 "compare_and_write": false, 00:05:54.444 "abort": true, 00:05:54.444 "seek_hole": false, 00:05:54.444 "seek_data": false, 00:05:54.444 "copy": true, 00:05:54.444 "nvme_iov_md": false 00:05:54.444 }, 00:05:54.444 "memory_domains": [ 00:05:54.444 { 00:05:54.444 "dma_device_id": "system", 00:05:54.444 "dma_device_type": 1 00:05:54.444 }, 00:05:54.444 { 00:05:54.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.444 "dma_device_type": 2 00:05:54.444 } 00:05:54.444 ], 00:05:54.444 "driver_specific": {} 00:05:54.444 }, 00:05:54.444 { 00:05:54.444 "name": "Passthru0", 00:05:54.444 "aliases": [ 00:05:54.444 "410d488f-fcb4-5a88-bec4-e7b2cced2fb8" 00:05:54.444 ], 00:05:54.444 "product_name": "passthru", 00:05:54.444 "block_size": 512, 00:05:54.444 "num_blocks": 16384, 00:05:54.444 "uuid": "410d488f-fcb4-5a88-bec4-e7b2cced2fb8", 00:05:54.444 "assigned_rate_limits": { 00:05:54.444 "rw_ios_per_sec": 0, 00:05:54.444 "rw_mbytes_per_sec": 0, 00:05:54.444 "r_mbytes_per_sec": 0, 00:05:54.444 "w_mbytes_per_sec": 0 00:05:54.444 }, 00:05:54.444 "claimed": false, 00:05:54.444 "zoned": false, 00:05:54.444 "supported_io_types": { 00:05:54.444 "read": true, 00:05:54.444 "write": true, 00:05:54.444 "unmap": true, 00:05:54.444 "flush": true, 00:05:54.444 "reset": true, 00:05:54.444 "nvme_admin": false, 00:05:54.444 "nvme_io": false, 00:05:54.444 "nvme_io_md": false, 00:05:54.444 "write_zeroes": true, 00:05:54.444 "zcopy": true, 00:05:54.444 "get_zone_info": false, 00:05:54.444 "zone_management": false, 00:05:54.444 "zone_append": false, 00:05:54.444 "compare": false, 00:05:54.444 "compare_and_write": false, 00:05:54.444 "abort": true, 00:05:54.444 "seek_hole": false, 00:05:54.444 "seek_data": false, 00:05:54.444 "copy": true, 00:05:54.444 "nvme_iov_md": false 00:05:54.444 }, 00:05:54.444 "memory_domains": [ 00:05:54.444 { 00:05:54.444 "dma_device_id": "system", 00:05:54.444 
"dma_device_type": 1 00:05:54.444 }, 00:05:54.444 { 00:05:54.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.444 "dma_device_type": 2 00:05:54.444 } 00:05:54.444 ], 00:05:54.444 "driver_specific": { 00:05:54.444 "passthru": { 00:05:54.444 "name": "Passthru0", 00:05:54.444 "base_bdev_name": "Malloc0" 00:05:54.444 } 00:05:54.444 } 00:05:54.444 } 00:05:54.444 ]' 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.444 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:54.444 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:54.702 10:54:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:54.702 00:05:54.702 real 0m0.326s 00:05:54.702 user 0m0.227s 00:05:54.702 sys 0m0.035s 00:05:54.702 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.702 10:54:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.702 ************************************ 00:05:54.702 END TEST rpc_integrity 00:05:54.702 ************************************ 00:05:54.702 10:55:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:54.702 10:55:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.702 10:55:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.702 10:55:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.702 ************************************ 00:05:54.702 START TEST rpc_plugins 00:05:54.702 ************************************ 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:54.702 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.702 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:54.702 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.702 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:05:54.702 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:54.702 { 00:05:54.702 "name": "Malloc1", 00:05:54.702 "aliases": [ 00:05:54.702 "2897a889-08e8-4587-9908-65d33ff34504" 00:05:54.702 ], 00:05:54.702 "product_name": "Malloc disk", 00:05:54.702 "block_size": 4096, 00:05:54.702 "num_blocks": 256, 00:05:54.702 "uuid": "2897a889-08e8-4587-9908-65d33ff34504", 00:05:54.702 "assigned_rate_limits": { 00:05:54.702 "rw_ios_per_sec": 0, 00:05:54.702 "rw_mbytes_per_sec": 0, 00:05:54.702 "r_mbytes_per_sec": 0, 00:05:54.702 "w_mbytes_per_sec": 0 00:05:54.702 }, 00:05:54.702 "claimed": false, 00:05:54.702 "zoned": false, 00:05:54.702 "supported_io_types": { 00:05:54.702 "read": true, 00:05:54.702 "write": true, 00:05:54.702 "unmap": true, 00:05:54.702 "flush": true, 00:05:54.702 "reset": true, 00:05:54.702 "nvme_admin": false, 00:05:54.702 "nvme_io": false, 00:05:54.702 "nvme_io_md": false, 00:05:54.702 "write_zeroes": true, 00:05:54.702 "zcopy": true, 00:05:54.702 "get_zone_info": false, 00:05:54.702 "zone_management": false, 00:05:54.702 "zone_append": false, 00:05:54.702 "compare": false, 00:05:54.702 "compare_and_write": false, 00:05:54.702 "abort": true, 00:05:54.702 "seek_hole": false, 00:05:54.702 "seek_data": false, 00:05:54.702 "copy": true, 00:05:54.702 "nvme_iov_md": false 00:05:54.702 }, 00:05:54.702 "memory_domains": [ 00:05:54.702 { 00:05:54.703 "dma_device_id": "system", 00:05:54.703 "dma_device_type": 1 00:05:54.703 }, 00:05:54.703 { 00:05:54.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.703 "dma_device_type": 2 00:05:54.703 } 00:05:54.703 ], 00:05:54.703 "driver_specific": {} 00:05:54.703 } 00:05:54.703 ]' 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:54.703 10:55:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:54.703 00:05:54.703 real 0m0.157s 00:05:54.703 user 0m0.104s 00:05:54.703 sys 0m0.019s 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.703 10:55:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:54.703 ************************************ 00:05:54.703 END TEST rpc_plugins 00:05:54.703 ************************************ 00:05:54.962 10:55:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:54.962 10:55:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:54.962 10:55:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.962 10:55:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.962 ************************************ 00:05:54.962 START TEST 
rpc_trace_cmd_test 00:05:54.962 ************************************ 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:54.962 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69959", 00:05:54.962 "tpoint_group_mask": "0x8", 00:05:54.962 "iscsi_conn": { 00:05:54.962 "mask": "0x2", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "scsi": { 00:05:54.962 "mask": "0x4", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "bdev": { 00:05:54.962 "mask": "0x8", 00:05:54.962 "tpoint_mask": "0xffffffffffffffff" 00:05:54.962 }, 00:05:54.962 "nvmf_rdma": { 00:05:54.962 "mask": "0x10", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "nvmf_tcp": { 00:05:54.962 "mask": "0x20", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "ftl": { 00:05:54.962 "mask": "0x40", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "blobfs": { 00:05:54.962 "mask": "0x80", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "dsa": { 00:05:54.962 "mask": "0x200", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "thread": { 00:05:54.962 "mask": "0x400", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "nvme_pcie": { 00:05:54.962 "mask": "0x800", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "iaa": { 00:05:54.962 "mask": "0x1000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "nvme_tcp": { 00:05:54.962 "mask": "0x2000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "bdev_nvme": { 00:05:54.962 "mask": "0x4000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "sock": { 00:05:54.962 "mask": "0x8000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "blob": { 00:05:54.962 "mask": "0x10000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "bdev_raid": { 00:05:54.962 "mask": "0x20000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 }, 00:05:54.962 "scheduler": { 00:05:54.962 "mask": "0x40000", 00:05:54.962 "tpoint_mask": "0x0" 00:05:54.962 } 00:05:54.962 }' 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:54.962 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.221 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.221 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.221 ************************************ 00:05:55.221 END TEST rpc_trace_cmd_test 00:05:55.221 
************************************ 00:05:55.221 10:55:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:55.221 00:05:55.221 real 0m0.277s 00:05:55.221 user 0m0.245s 00:05:55.221 sys 0m0.024s 00:05:55.221 10:55:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.221 10:55:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.221 10:55:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.221 10:55:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.221 10:55:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.221 10:55:00 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.221 10:55:00 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.221 10:55:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.221 ************************************ 00:05:55.221 START TEST rpc_daemon_integrity 00:05:55.221 ************************************ 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.221 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.222 { 00:05:55.222 "name": "Malloc2", 00:05:55.222 "aliases": [ 00:05:55.222 "02921f37-a303-4005-ad19-48416ef135b5" 00:05:55.222 ], 00:05:55.222 "product_name": "Malloc disk", 00:05:55.222 "block_size": 512, 00:05:55.222 "num_blocks": 16384, 00:05:55.222 "uuid": "02921f37-a303-4005-ad19-48416ef135b5", 00:05:55.222 "assigned_rate_limits": { 00:05:55.222 "rw_ios_per_sec": 0, 00:05:55.222 "rw_mbytes_per_sec": 0, 00:05:55.222 "r_mbytes_per_sec": 0, 00:05:55.222 "w_mbytes_per_sec": 0 00:05:55.222 }, 00:05:55.222 "claimed": false, 00:05:55.222 "zoned": false, 00:05:55.222 "supported_io_types": { 00:05:55.222 "read": true, 00:05:55.222 "write": true, 00:05:55.222 "unmap": true, 00:05:55.222 "flush": true, 00:05:55.222 "reset": true, 00:05:55.222 "nvme_admin": false, 00:05:55.222 "nvme_io": false, 00:05:55.222 "nvme_io_md": false, 00:05:55.222 "write_zeroes": true, 
00:05:55.222 "zcopy": true, 00:05:55.222 "get_zone_info": false, 00:05:55.222 "zone_management": false, 00:05:55.222 "zone_append": false, 00:05:55.222 "compare": false, 00:05:55.222 "compare_and_write": false, 00:05:55.222 "abort": true, 00:05:55.222 "seek_hole": false, 00:05:55.222 "seek_data": false, 00:05:55.222 "copy": true, 00:05:55.222 "nvme_iov_md": false 00:05:55.222 }, 00:05:55.222 "memory_domains": [ 00:05:55.222 { 00:05:55.222 "dma_device_id": "system", 00:05:55.222 "dma_device_type": 1 00:05:55.222 }, 00:05:55.222 { 00:05:55.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.222 "dma_device_type": 2 00:05:55.222 } 00:05:55.222 ], 00:05:55.222 "driver_specific": {} 00:05:55.222 } 00:05:55.222 ]' 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.222 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.481 [2024-10-29 10:55:00.723092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.481 [2024-10-29 10:55:00.723152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.481 [2024-10-29 10:55:00.723184] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24a3ec0 00:05:55.481 [2024-10-29 10:55:00.723193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:55.481 [2024-10-29 10:55:00.724847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.481 [2024-10-29 10:55:00.724879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.481 Passthru0 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.481 { 00:05:55.481 "name": "Malloc2", 00:05:55.481 "aliases": [ 00:05:55.481 "02921f37-a303-4005-ad19-48416ef135b5" 00:05:55.481 ], 00:05:55.481 "product_name": "Malloc disk", 00:05:55.481 "block_size": 512, 00:05:55.481 "num_blocks": 16384, 00:05:55.481 "uuid": "02921f37-a303-4005-ad19-48416ef135b5", 00:05:55.481 "assigned_rate_limits": { 00:05:55.481 "rw_ios_per_sec": 0, 00:05:55.481 "rw_mbytes_per_sec": 0, 00:05:55.481 "r_mbytes_per_sec": 0, 00:05:55.481 "w_mbytes_per_sec": 0 00:05:55.481 }, 00:05:55.481 "claimed": true, 00:05:55.481 "claim_type": "exclusive_write", 00:05:55.481 "zoned": false, 00:05:55.481 "supported_io_types": { 00:05:55.481 "read": true, 00:05:55.481 "write": true, 00:05:55.481 "unmap": true, 00:05:55.481 "flush": true, 00:05:55.481 "reset": true, 00:05:55.481 "nvme_admin": false, 00:05:55.481 "nvme_io": false, 00:05:55.481 "nvme_io_md": false, 00:05:55.481 "write_zeroes": true, 00:05:55.481 "zcopy": true, 00:05:55.481 "get_zone_info": false, 00:05:55.481 "zone_management": false, 00:05:55.481 
"zone_append": false, 00:05:55.481 "compare": false, 00:05:55.481 "compare_and_write": false, 00:05:55.481 "abort": true, 00:05:55.481 "seek_hole": false, 00:05:55.481 "seek_data": false, 00:05:55.481 "copy": true, 00:05:55.481 "nvme_iov_md": false 00:05:55.481 }, 00:05:55.481 "memory_domains": [ 00:05:55.481 { 00:05:55.481 "dma_device_id": "system", 00:05:55.481 "dma_device_type": 1 00:05:55.481 }, 00:05:55.481 { 00:05:55.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.481 "dma_device_type": 2 00:05:55.481 } 00:05:55.481 ], 00:05:55.481 "driver_specific": {} 00:05:55.481 }, 00:05:55.481 { 00:05:55.481 "name": "Passthru0", 00:05:55.481 "aliases": [ 00:05:55.481 "8530ed6d-bcad-5308-9767-a92627229eac" 00:05:55.481 ], 00:05:55.481 "product_name": "passthru", 00:05:55.481 "block_size": 512, 00:05:55.481 "num_blocks": 16384, 00:05:55.481 "uuid": "8530ed6d-bcad-5308-9767-a92627229eac", 00:05:55.481 "assigned_rate_limits": { 00:05:55.481 "rw_ios_per_sec": 0, 00:05:55.481 "rw_mbytes_per_sec": 0, 00:05:55.481 "r_mbytes_per_sec": 0, 00:05:55.481 "w_mbytes_per_sec": 0 00:05:55.481 }, 00:05:55.481 "claimed": false, 00:05:55.481 "zoned": false, 00:05:55.481 "supported_io_types": { 00:05:55.481 "read": true, 00:05:55.481 "write": true, 00:05:55.481 "unmap": true, 00:05:55.481 "flush": true, 00:05:55.481 "reset": true, 00:05:55.481 "nvme_admin": false, 00:05:55.481 "nvme_io": false, 00:05:55.481 "nvme_io_md": false, 00:05:55.481 "write_zeroes": true, 00:05:55.481 "zcopy": true, 00:05:55.481 "get_zone_info": false, 00:05:55.481 "zone_management": false, 00:05:55.481 "zone_append": false, 00:05:55.481 "compare": false, 00:05:55.481 "compare_and_write": false, 00:05:55.481 "abort": true, 00:05:55.481 "seek_hole": false, 00:05:55.481 "seek_data": false, 00:05:55.481 "copy": true, 00:05:55.481 "nvme_iov_md": false 00:05:55.481 }, 00:05:55.481 "memory_domains": [ 00:05:55.481 { 00:05:55.481 "dma_device_id": "system", 00:05:55.481 "dma_device_type": 1 00:05:55.481 }, 00:05:55.481 { 00:05:55.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.481 "dma_device_type": 2 00:05:55.481 } 00:05:55.481 ], 00:05:55.481 "driver_specific": { 00:05:55.481 "passthru": { 00:05:55.481 "name": "Passthru0", 00:05:55.481 "base_bdev_name": "Malloc2" 00:05:55.481 } 00:05:55.481 } 00:05:55.481 } 00:05:55.481 ]' 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.481 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.482 ************************************ 00:05:55.482 END TEST rpc_daemon_integrity 00:05:55.482 ************************************ 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.482 00:05:55.482 real 0m0.323s 00:05:55.482 user 0m0.218s 00:05:55.482 sys 0m0.043s 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.482 10:55:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.482 10:55:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:55.482 10:55:00 rpc -- rpc/rpc.sh@84 -- # killprocess 69959 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@952 -- # '[' -z 69959 ']' 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@956 -- # kill -0 69959 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@957 -- # uname 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69959 00:05:55.482 killing process with pid 69959 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69959' 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@971 -- # kill 69959 00:05:55.482 10:55:00 rpc -- common/autotest_common.sh@976 -- # wait 69959 00:05:55.741 00:05:55.741 real 0m2.115s 00:05:55.741 user 0m2.921s 00:05:55.741 sys 0m0.529s 00:05:55.741 ************************************ 00:05:55.741 END TEST rpc 00:05:55.741 ************************************ 00:05:55.741 10:55:01 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.741 10:55:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.741 10:55:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:55.741 10:55:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.741 10:55:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.741 10:55:01 -- common/autotest_common.sh@10 -- # set +x 00:05:55.741 ************************************ 00:05:55.741 START TEST skip_rpc 00:05:55.741 ************************************ 00:05:55.741 10:55:01 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:56.000 * Looking for test storage... 
00:05:56.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.000 10:55:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.000 --rc genhtml_branch_coverage=1 00:05:56.000 --rc genhtml_function_coverage=1 00:05:56.000 --rc genhtml_legend=1 00:05:56.000 --rc geninfo_all_blocks=1 00:05:56.000 --rc geninfo_unexecuted_blocks=1 00:05:56.000 00:05:56.000 ' 00:05:56.000 10:55:01 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.000 --rc genhtml_branch_coverage=1 00:05:56.000 --rc genhtml_function_coverage=1 00:05:56.000 --rc genhtml_legend=1 00:05:56.000 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 10:55:01 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:05:56.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.001 --rc genhtml_branch_coverage=1 00:05:56.001 --rc genhtml_function_coverage=1 00:05:56.001 --rc genhtml_legend=1 00:05:56.001 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 10:55:01 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.001 --rc genhtml_branch_coverage=1 00:05:56.001 --rc genhtml_function_coverage=1 00:05:56.001 --rc genhtml_legend=1 00:05:56.001 --rc geninfo_all_blocks=1 00:05:56.001 --rc geninfo_unexecuted_blocks=1 00:05:56.001 00:05:56.001 ' 00:05:56.001 10:55:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.001 10:55:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.001 10:55:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:56.001 10:55:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.001 10:55:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.001 10:55:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.001 ************************************ 00:05:56.001 START TEST skip_rpc 00:05:56.001 ************************************ 00:05:56.001 10:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:56.001 10:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70152 00:05:56.001 10:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.001 10:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:56.001 10:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:56.260 [2024-10-29 10:55:01.501643] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:05:56.260 [2024-10-29 10:55:01.501910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70152 ] 00:05:56.260 [2024-10-29 10:55:01.642516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.260 [2024-10-29 10:55:01.663235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.260 [2024-10-29 10:55:01.698333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70152 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 70152 ']' 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 70152 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70152 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70152' 00:06:01.535 killing process with pid 70152 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 70152 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 70152 00:06:01.535 00:06:01.535 real 0m5.261s 00:06:01.535 user 0m5.000s 00:06:01.535 sys 0m0.180s 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.535 10:55:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:01.535 ************************************ 00:06:01.535 END TEST skip_rpc 00:06:01.535 ************************************ 00:06:01.535 10:55:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:01.535 10:55:06 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.535 10:55:06 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.535 10:55:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.535 ************************************ 00:06:01.535 START TEST skip_rpc_with_json 00:06:01.535 ************************************ 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70233 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70233 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 70233 ']' 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.535 10:55:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.535 [2024-10-29 10:55:06.822094] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:01.535 [2024-10-29 10:55:06.822198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70233 ] 00:06:01.535 [2024-10-29 10:55:06.970148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.535 [2024-10-29 10:55:06.992722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.535 [2024-10-29 10:55:07.031431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.795 [2024-10-29 10:55:07.156221] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:01.795 request: 00:06:01.795 { 00:06:01.795 "trtype": "tcp", 00:06:01.795 "method": "nvmf_get_transports", 00:06:01.795 "req_id": 1 00:06:01.795 } 00:06:01.795 Got JSON-RPC error response 00:06:01.795 response: 00:06:01.795 { 00:06:01.795 "code": -19, 00:06:01.795 "message": "No such device" 00:06:01.795 } 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.795 [2024-10-29 10:55:07.168307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.795 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.054 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.055 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.055 { 00:06:02.055 "subsystems": [ 00:06:02.055 { 00:06:02.055 "subsystem": "fsdev", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "fsdev_set_opts", 00:06:02.055 "params": { 00:06:02.055 "fsdev_io_pool_size": 65535, 00:06:02.055 "fsdev_io_cache_size": 256 00:06:02.055 } 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "vfio_user_target", 00:06:02.055 "config": null 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "keyring", 00:06:02.055 "config": [] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "iobuf", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "iobuf_set_options", 00:06:02.055 "params": { 00:06:02.055 "small_pool_count": 8192, 00:06:02.055 "large_pool_count": 1024, 00:06:02.055 
"small_bufsize": 8192, 00:06:02.055 "large_bufsize": 135168, 00:06:02.055 "enable_numa": false 00:06:02.055 } 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "sock", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "sock_set_default_impl", 00:06:02.055 "params": { 00:06:02.055 "impl_name": "uring" 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "sock_impl_set_options", 00:06:02.055 "params": { 00:06:02.055 "impl_name": "ssl", 00:06:02.055 "recv_buf_size": 4096, 00:06:02.055 "send_buf_size": 4096, 00:06:02.055 "enable_recv_pipe": true, 00:06:02.055 "enable_quickack": false, 00:06:02.055 "enable_placement_id": 0, 00:06:02.055 "enable_zerocopy_send_server": true, 00:06:02.055 "enable_zerocopy_send_client": false, 00:06:02.055 "zerocopy_threshold": 0, 00:06:02.055 "tls_version": 0, 00:06:02.055 "enable_ktls": false 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "sock_impl_set_options", 00:06:02.055 "params": { 00:06:02.055 "impl_name": "posix", 00:06:02.055 "recv_buf_size": 2097152, 00:06:02.055 "send_buf_size": 2097152, 00:06:02.055 "enable_recv_pipe": true, 00:06:02.055 "enable_quickack": false, 00:06:02.055 "enable_placement_id": 0, 00:06:02.055 "enable_zerocopy_send_server": true, 00:06:02.055 "enable_zerocopy_send_client": false, 00:06:02.055 "zerocopy_threshold": 0, 00:06:02.055 "tls_version": 0, 00:06:02.055 "enable_ktls": false 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "sock_impl_set_options", 00:06:02.055 "params": { 00:06:02.055 "impl_name": "uring", 00:06:02.055 "recv_buf_size": 2097152, 00:06:02.055 "send_buf_size": 2097152, 00:06:02.055 "enable_recv_pipe": true, 00:06:02.055 "enable_quickack": false, 00:06:02.055 "enable_placement_id": 0, 00:06:02.055 "enable_zerocopy_send_server": false, 00:06:02.055 "enable_zerocopy_send_client": false, 00:06:02.055 "zerocopy_threshold": 0, 00:06:02.055 "tls_version": 0, 00:06:02.055 "enable_ktls": false 00:06:02.055 } 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "vmd", 00:06:02.055 "config": [] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "accel", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "accel_set_options", 00:06:02.055 "params": { 00:06:02.055 "small_cache_size": 128, 00:06:02.055 "large_cache_size": 16, 00:06:02.055 "task_count": 2048, 00:06:02.055 "sequence_count": 2048, 00:06:02.055 "buf_count": 2048 00:06:02.055 } 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "bdev", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "bdev_set_options", 00:06:02.055 "params": { 00:06:02.055 "bdev_io_pool_size": 65535, 00:06:02.055 "bdev_io_cache_size": 256, 00:06:02.055 "bdev_auto_examine": true, 00:06:02.055 "iobuf_small_cache_size": 128, 00:06:02.055 "iobuf_large_cache_size": 16 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "bdev_raid_set_options", 00:06:02.055 "params": { 00:06:02.055 "process_window_size_kb": 1024, 00:06:02.055 "process_max_bandwidth_mb_sec": 0 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "bdev_iscsi_set_options", 00:06:02.055 "params": { 00:06:02.055 "timeout_sec": 30 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "bdev_nvme_set_options", 00:06:02.055 "params": { 00:06:02.055 "action_on_timeout": "none", 00:06:02.055 "timeout_us": 0, 00:06:02.055 "timeout_admin_us": 0, 00:06:02.055 "keep_alive_timeout_ms": 10000, 
00:06:02.055 "arbitration_burst": 0, 00:06:02.055 "low_priority_weight": 0, 00:06:02.055 "medium_priority_weight": 0, 00:06:02.055 "high_priority_weight": 0, 00:06:02.055 "nvme_adminq_poll_period_us": 10000, 00:06:02.055 "nvme_ioq_poll_period_us": 0, 00:06:02.055 "io_queue_requests": 0, 00:06:02.055 "delay_cmd_submit": true, 00:06:02.055 "transport_retry_count": 4, 00:06:02.055 "bdev_retry_count": 3, 00:06:02.055 "transport_ack_timeout": 0, 00:06:02.055 "ctrlr_loss_timeout_sec": 0, 00:06:02.055 "reconnect_delay_sec": 0, 00:06:02.055 "fast_io_fail_timeout_sec": 0, 00:06:02.055 "disable_auto_failback": false, 00:06:02.055 "generate_uuids": false, 00:06:02.055 "transport_tos": 0, 00:06:02.055 "nvme_error_stat": false, 00:06:02.055 "rdma_srq_size": 0, 00:06:02.055 "io_path_stat": false, 00:06:02.055 "allow_accel_sequence": false, 00:06:02.055 "rdma_max_cq_size": 0, 00:06:02.055 "rdma_cm_event_timeout_ms": 0, 00:06:02.055 "dhchap_digests": [ 00:06:02.055 "sha256", 00:06:02.055 "sha384", 00:06:02.055 "sha512" 00:06:02.055 ], 00:06:02.055 "dhchap_dhgroups": [ 00:06:02.055 "null", 00:06:02.055 "ffdhe2048", 00:06:02.055 "ffdhe3072", 00:06:02.055 "ffdhe4096", 00:06:02.055 "ffdhe6144", 00:06:02.055 "ffdhe8192" 00:06:02.055 ] 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "bdev_nvme_set_hotplug", 00:06:02.055 "params": { 00:06:02.055 "period_us": 100000, 00:06:02.055 "enable": false 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "bdev_wait_for_examine" 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "scsi", 00:06:02.055 "config": null 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "scheduler", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "framework_set_scheduler", 00:06:02.055 "params": { 00:06:02.055 "name": "static" 00:06:02.055 } 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "vhost_scsi", 00:06:02.055 "config": [] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "vhost_blk", 00:06:02.055 "config": [] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "ublk", 00:06:02.055 "config": [] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "nbd", 00:06:02.055 "config": [] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "nvmf", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "nvmf_set_config", 00:06:02.055 "params": { 00:06:02.055 "discovery_filter": "match_any", 00:06:02.055 "admin_cmd_passthru": { 00:06:02.055 "identify_ctrlr": false 00:06:02.055 }, 00:06:02.055 "dhchap_digests": [ 00:06:02.055 "sha256", 00:06:02.055 "sha384", 00:06:02.055 "sha512" 00:06:02.055 ], 00:06:02.055 "dhchap_dhgroups": [ 00:06:02.055 "null", 00:06:02.055 "ffdhe2048", 00:06:02.055 "ffdhe3072", 00:06:02.055 "ffdhe4096", 00:06:02.055 "ffdhe6144", 00:06:02.055 "ffdhe8192" 00:06:02.055 ] 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "nvmf_set_max_subsystems", 00:06:02.055 "params": { 00:06:02.055 "max_subsystems": 1024 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "nvmf_set_crdt", 00:06:02.055 "params": { 00:06:02.055 "crdt1": 0, 00:06:02.055 "crdt2": 0, 00:06:02.055 "crdt3": 0 00:06:02.055 } 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "method": "nvmf_create_transport", 00:06:02.055 "params": { 00:06:02.055 "trtype": "TCP", 00:06:02.055 "max_queue_depth": 128, 00:06:02.055 "max_io_qpairs_per_ctrlr": 127, 00:06:02.055 "in_capsule_data_size": 4096, 00:06:02.055 "max_io_size": 131072, 00:06:02.055 
"io_unit_size": 131072, 00:06:02.055 "max_aq_depth": 128, 00:06:02.055 "num_shared_buffers": 511, 00:06:02.055 "buf_cache_size": 4294967295, 00:06:02.055 "dif_insert_or_strip": false, 00:06:02.055 "zcopy": false, 00:06:02.055 "c2h_success": true, 00:06:02.055 "sock_priority": 0, 00:06:02.055 "abort_timeout_sec": 1, 00:06:02.055 "ack_timeout": 0, 00:06:02.055 "data_wr_pool_size": 0 00:06:02.055 } 00:06:02.055 } 00:06:02.055 ] 00:06:02.055 }, 00:06:02.055 { 00:06:02.055 "subsystem": "iscsi", 00:06:02.055 "config": [ 00:06:02.055 { 00:06:02.055 "method": "iscsi_set_options", 00:06:02.055 "params": { 00:06:02.055 "node_base": "iqn.2016-06.io.spdk", 00:06:02.055 "max_sessions": 128, 00:06:02.055 "max_connections_per_session": 2, 00:06:02.056 "max_queue_depth": 64, 00:06:02.056 "default_time2wait": 2, 00:06:02.056 "default_time2retain": 20, 00:06:02.056 "first_burst_length": 8192, 00:06:02.056 "immediate_data": true, 00:06:02.056 "allow_duplicated_isid": false, 00:06:02.056 "error_recovery_level": 0, 00:06:02.056 "nop_timeout": 60, 00:06:02.056 "nop_in_interval": 30, 00:06:02.056 "disable_chap": false, 00:06:02.056 "require_chap": false, 00:06:02.056 "mutual_chap": false, 00:06:02.056 "chap_group": 0, 00:06:02.056 "max_large_datain_per_connection": 64, 00:06:02.056 "max_r2t_per_connection": 4, 00:06:02.056 "pdu_pool_size": 36864, 00:06:02.056 "immediate_data_pool_size": 16384, 00:06:02.056 "data_out_pool_size": 2048 00:06:02.056 } 00:06:02.056 } 00:06:02.056 ] 00:06:02.056 } 00:06:02.056 ] 00:06:02.056 } 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70233 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 70233 ']' 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 70233 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70233 00:06:02.056 killing process with pid 70233 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70233' 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 70233 00:06:02.056 10:55:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 70233 00:06:02.314 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70253 00:06:02.314 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.314 10:55:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:07.586 10:55:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70253 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 70253 ']' 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 70253 00:06:07.587 
10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70253 00:06:07.587 killing process with pid 70253 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70253' 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 70253 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 70253 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:07.587 ************************************ 00:06:07.587 END TEST skip_rpc_with_json 00:06:07.587 ************************************ 00:06:07.587 00:06:07.587 real 0m6.118s 00:06:07.587 user 0m5.883s 00:06:07.587 sys 0m0.411s 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.587 10:55:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:07.587 10:55:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.587 10:55:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.587 10:55:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.587 ************************************ 00:06:07.587 START TEST skip_rpc_with_delay 00:06:07.587 ************************************ 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:07.587 10:55:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:07.587 [2024-10-29 10:55:12.991435] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:07.587 10:55:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:07.587 10:55:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.587 ************************************ 00:06:07.587 END TEST skip_rpc_with_delay 00:06:07.587 ************************************ 00:06:07.587 10:55:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.587 10:55:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.587 00:06:07.587 real 0m0.091s 00:06:07.587 user 0m0.059s 00:06:07.587 sys 0m0.031s 00:06:07.587 10:55:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.587 10:55:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:07.587 10:55:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:07.587 10:55:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:07.587 10:55:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:07.587 10:55:13 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:07.587 10:55:13 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:07.587 10:55:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.587 ************************************ 00:06:07.587 START TEST exit_on_failed_rpc_init 00:06:07.587 ************************************ 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:07.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70357 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70357 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 70357 ']' 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.587 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:07.846 [2024-10-29 10:55:13.132267] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:07.846 [2024-10-29 10:55:13.132352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70357 ] 00:06:07.846 [2024-10-29 10:55:13.282782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.846 [2024-10-29 10:55:13.308397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.105 [2024-10-29 10:55:13.353066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.105 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:08.105 [2024-10-29 10:55:13.554255] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:08.105 [2024-10-29 10:55:13.554352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70368 ] 00:06:08.365 [2024-10-29 10:55:13.698647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.365 [2024-10-29 10:55:13.721118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.365 [2024-10-29 10:55:13.721211] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:08.365 [2024-10-29 10:55:13.721224] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:08.365 [2024-10-29 10:55:13.721231] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70357 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 70357 ']' 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 70357 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70357 00:06:08.365 killing process with pid 70357 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70357' 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 70357 00:06:08.365 10:55:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 70357 00:06:08.624 ************************************ 00:06:08.624 END TEST exit_on_failed_rpc_init 00:06:08.624 ************************************ 00:06:08.624 00:06:08.624 real 0m0.951s 00:06:08.624 user 0m1.073s 00:06:08.624 sys 0m0.279s 00:06:08.624 10:55:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.624 10:55:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.624 10:55:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:08.624 00:06:08.624 real 0m12.829s 00:06:08.624 user 0m12.203s 00:06:08.624 sys 0m1.108s 00:06:08.624 10:55:14 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.624 10:55:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.624 ************************************ 00:06:08.624 END TEST skip_rpc 00:06:08.624 ************************************ 00:06:08.624 10:55:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.624 10:55:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.624 10:55:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.624 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.624 
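The skip_rpc suite above drives spdk_tgt entirely through rpc.py over the default /var/tmp/spdk.sock socket. As a reference point before the rpc_client test below, a minimal manual reproduction of the skip_rpc_with_json flow can be sketched using only commands that already appear in this log; the redirect into config.json and the exact ordering are illustrative assumptions, not the test script itself:

    # create the TCP transport the test expects, then snapshot the running config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
    # restart the target non-interactively from that snapshot (as the second spdk_tgt invocation above does)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json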
************************************ 00:06:08.624 START TEST rpc_client 00:06:08.624 ************************************ 00:06:08.624 10:55:14 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.883 * Looking for test storage... 00:06:08.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:08.883 10:55:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.883 10:55:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.883 10:55:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.883 10:55:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:08.883 10:55:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.884 10:55:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.884 --rc genhtml_branch_coverage=1 00:06:08.884 --rc genhtml_function_coverage=1 00:06:08.884 --rc genhtml_legend=1 00:06:08.884 --rc geninfo_all_blocks=1 00:06:08.884 --rc geninfo_unexecuted_blocks=1 00:06:08.884 00:06:08.884 ' 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.884 --rc genhtml_branch_coverage=1 00:06:08.884 --rc genhtml_function_coverage=1 00:06:08.884 --rc genhtml_legend=1 00:06:08.884 --rc geninfo_all_blocks=1 00:06:08.884 --rc geninfo_unexecuted_blocks=1 00:06:08.884 00:06:08.884 ' 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.884 --rc genhtml_branch_coverage=1 00:06:08.884 --rc genhtml_function_coverage=1 00:06:08.884 --rc genhtml_legend=1 00:06:08.884 --rc geninfo_all_blocks=1 00:06:08.884 --rc geninfo_unexecuted_blocks=1 00:06:08.884 00:06:08.884 ' 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.884 --rc genhtml_branch_coverage=1 00:06:08.884 --rc genhtml_function_coverage=1 00:06:08.884 --rc genhtml_legend=1 00:06:08.884 --rc geninfo_all_blocks=1 00:06:08.884 --rc geninfo_unexecuted_blocks=1 00:06:08.884 00:06:08.884 ' 00:06:08.884 10:55:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:08.884 OK 00:06:08.884 10:55:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.884 00:06:08.884 real 0m0.198s 00:06:08.884 user 0m0.126s 00:06:08.884 sys 0m0.083s 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.884 10:55:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:08.884 ************************************ 00:06:08.884 END TEST rpc_client 00:06:08.884 ************************************ 00:06:08.884 10:55:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.884 10:55:14 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.884 10:55:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.884 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.884 ************************************ 00:06:08.884 START TEST json_config 00:06:08.884 ************************************ 00:06:08.884 10:55:14 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.144 10:55:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.144 10:55:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.144 10:55:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.144 10:55:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.144 10:55:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.144 10:55:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:09.144 10:55:14 json_config -- scripts/common.sh@345 -- # : 1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.144 10:55:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.144 10:55:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@353 -- # local d=1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.144 10:55:14 json_config -- scripts/common.sh@355 -- # echo 1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.144 10:55:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@353 -- # local d=2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.144 10:55:14 json_config -- scripts/common.sh@355 -- # echo 2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.144 10:55:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.144 10:55:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.144 10:55:14 json_config -- scripts/common.sh@368 -- # return 0 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:09.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.144 --rc genhtml_branch_coverage=1 00:06:09.144 --rc genhtml_function_coverage=1 00:06:09.144 --rc genhtml_legend=1 00:06:09.144 --rc geninfo_all_blocks=1 00:06:09.144 --rc geninfo_unexecuted_blocks=1 00:06:09.144 00:06:09.144 ' 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:09.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.144 --rc genhtml_branch_coverage=1 00:06:09.144 --rc genhtml_function_coverage=1 00:06:09.144 --rc genhtml_legend=1 00:06:09.144 --rc geninfo_all_blocks=1 00:06:09.144 --rc geninfo_unexecuted_blocks=1 00:06:09.144 00:06:09.144 ' 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:09.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.144 --rc genhtml_branch_coverage=1 00:06:09.144 --rc genhtml_function_coverage=1 00:06:09.144 --rc genhtml_legend=1 00:06:09.144 --rc geninfo_all_blocks=1 00:06:09.144 --rc geninfo_unexecuted_blocks=1 00:06:09.144 00:06:09.144 ' 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:09.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.144 --rc genhtml_branch_coverage=1 00:06:09.144 --rc genhtml_function_coverage=1 00:06:09.144 --rc genhtml_legend=1 00:06:09.144 --rc geninfo_all_blocks=1 00:06:09.144 --rc geninfo_unexecuted_blocks=1 00:06:09.144 00:06:09.144 ' 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.144 10:55:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.144 10:55:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.144 10:55:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.144 10:55:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.144 10:55:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.144 10:55:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 10:55:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 10:55:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 10:55:14 json_config -- paths/export.sh@5 -- # export PATH 00:06:09.144 10:55:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@51 -- # : 0 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.144 10:55:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.144 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.144 10:55:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:09.144 INFO: JSON configuration test init 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.144 10:55:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.144 10:55:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.145 10:55:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:09.145 10:55:14 json_config -- json_config/common.sh@9 -- # local app=target 00:06:09.145 10:55:14 json_config -- json_config/common.sh@10 -- # shift 
00:06:09.145 10:55:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:09.145 10:55:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:09.145 10:55:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:09.145 10:55:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:09.145 10:55:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:09.145 10:55:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70502 00:06:09.145 10:55:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:09.145 Waiting for target to run... 00:06:09.145 10:55:14 json_config -- json_config/common.sh@25 -- # waitforlisten 70502 /var/tmp/spdk_tgt.sock 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@833 -- # '[' -z 70502 ']' 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:09.145 10:55:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.145 10:55:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:09.145 [2024-10-29 10:55:14.618246] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:09.145 [2024-10-29 10:55:14.618346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70502 ] 00:06:09.713 [2024-10-29 10:55:14.915340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.713 [2024-10-29 10:55:14.928899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.281 10:55:15 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.281 10:55:15 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:10.281 00:06:10.281 10:55:15 json_config -- json_config/common.sh@26 -- # echo '' 00:06:10.281 10:55:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:10.281 10:55:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:10.281 10:55:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.281 10:55:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.281 10:55:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:10.281 10:55:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:10.281 10:55:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.281 10:55:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.281 10:55:15 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:10.281 10:55:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:10.281 10:55:15 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:10.542 [2024-10-29 10:55:15.936613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:10.801 10:55:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:10.801 10:55:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:10.801 10:55:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:10.801 10:55:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@54 -- # sort 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:11.061 10:55:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:11.061 10:55:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:11.061 10:55:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.061 10:55:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.061 10:55:16 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:11.061 10:55:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:11.061 10:55:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:11.319 MallocForNvmf0 00:06:11.319 10:55:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:11.319 10:55:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:11.578 MallocForNvmf1 00:06:11.578 10:55:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:11.578 10:55:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:11.838 [2024-10-29 10:55:17.263781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.838 10:55:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:11.838 10:55:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.096 10:55:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.096 10:55:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.355 10:55:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.355 10:55:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.614 10:55:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:12.614 10:55:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:12.873 [2024-10-29 10:55:18.312468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:12.873 10:55:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:12.873 10:55:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.873 10:55:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:12.873 10:55:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:12.873 10:55:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.873 10:55:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.131 10:55:18 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:13.131 10:55:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.131 10:55:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.389 MallocBdevForConfigChangeCheck 00:06:13.389 10:55:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:13.389 10:55:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.389 10:55:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.389 10:55:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:13.389 10:55:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.957 INFO: shutting down applications... 00:06:13.957 10:55:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:13.957 10:55:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:13.957 10:55:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:13.957 10:55:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:13.957 10:55:19 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:14.216 Calling clear_iscsi_subsystem 00:06:14.216 Calling clear_nvmf_subsystem 00:06:14.216 Calling clear_nbd_subsystem 00:06:14.216 Calling clear_ublk_subsystem 00:06:14.216 Calling clear_vhost_blk_subsystem 00:06:14.216 Calling clear_vhost_scsi_subsystem 00:06:14.216 Calling clear_bdev_subsystem 00:06:14.216 10:55:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:14.216 10:55:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:14.216 10:55:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:14.216 10:55:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:14.216 10:55:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.216 10:55:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:14.474 10:55:19 json_config -- json_config/json_config.sh@352 -- # break 00:06:14.474 10:55:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:14.474 10:55:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:14.474 10:55:19 json_config -- json_config/common.sh@31 -- # local app=target 00:06:14.474 10:55:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.474 10:55:19 json_config -- json_config/common.sh@35 -- # [[ -n 70502 ]] 00:06:14.474 10:55:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 70502 00:06:14.474 10:55:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.474 10:55:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.474 10:55:19 json_config -- json_config/common.sh@41 -- # kill -0 70502 00:06:14.474 10:55:19 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:15.043 10:55:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.043 10:55:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.043 10:55:20 json_config -- json_config/common.sh@41 -- # kill -0 70502 00:06:15.043 10:55:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.043 10:55:20 json_config -- json_config/common.sh@43 -- # break 00:06:15.043 10:55:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.043 SPDK target shutdown done 00:06:15.043 10:55:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.043 INFO: relaunching applications... 00:06:15.043 10:55:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:15.043 10:55:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.043 10:55:20 json_config -- json_config/common.sh@9 -- # local app=target 00:06:15.043 10:55:20 json_config -- json_config/common.sh@10 -- # shift 00:06:15.043 10:55:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:15.043 10:55:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:15.043 10:55:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:15.043 10:55:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.043 10:55:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:15.043 10:55:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70703 00:06:15.043 Waiting for target to run... 00:06:15.043 10:55:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:15.043 10:55:20 json_config -- json_config/common.sh@25 -- # waitforlisten 70703 /var/tmp/spdk_tgt.sock 00:06:15.043 10:55:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.043 10:55:20 json_config -- common/autotest_common.sh@833 -- # '[' -z 70703 ']' 00:06:15.043 10:55:20 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.043 10:55:20 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.043 10:55:20 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.043 10:55:20 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.043 10:55:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.043 [2024-10-29 10:55:20.507511] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
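Note: the spdk_tgt_config.json being replayed by the relaunch above was saved with save_config just before the restart; the NVMe-oF side of that config was built earlier in this run by the create_nvmf_subsystem_config step. A condensed sketch of that RPC sequence (every call below appears verbatim in the trace above; the rpc() helper is only shorthand for the full rpc.py path):

rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB malloc bdev, 512-byte blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB malloc bdev, 1024-byte blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420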
00:06:15.043 [2024-10-29 10:55:20.507617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70703 ] 00:06:15.611 [2024-10-29 10:55:20.804997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.611 [2024-10-29 10:55:20.817301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.611 [2024-10-29 10:55:20.945128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.871 [2024-10-29 10:55:21.133590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.871 [2024-10-29 10:55:21.165653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:16.131 10:55:21 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:16.131 10:55:21 json_config -- common/autotest_common.sh@866 -- # return 0 00:06:16.131 00:06:16.131 10:55:21 json_config -- json_config/common.sh@26 -- # echo '' 00:06:16.131 10:55:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:16.131 INFO: Checking if target configuration is the same... 00:06:16.131 10:55:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:16.131 10:55:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:16.131 10:55:21 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.131 10:55:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.131 + '[' 2 -ne 2 ']' 00:06:16.131 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:16.131 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:16.131 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:16.131 +++ basename /dev/fd/62 00:06:16.131 ++ mktemp /tmp/62.XXX 00:06:16.131 + tmp_file_1=/tmp/62.Wrj 00:06:16.131 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.131 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:16.131 + tmp_file_2=/tmp/spdk_tgt_config.json.9mf 00:06:16.131 + ret=0 00:06:16.131 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:16.390 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:16.650 + diff -u /tmp/62.Wrj /tmp/spdk_tgt_config.json.9mf 00:06:16.650 INFO: JSON config files are the same 00:06:16.650 + echo 'INFO: JSON config files are the same' 00:06:16.650 + rm /tmp/62.Wrj /tmp/spdk_tgt_config.json.9mf 00:06:16.650 + exit 0 00:06:16.650 10:55:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:16.650 INFO: changing configuration and checking if this can be detected... 00:06:16.650 10:55:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
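Note: the "Checking if target configuration is the same" step above runs json_diff.sh, which normalizes both the live save_config output (fed in on /dev/fd/62) and spdk_tgt_config.json with config_filter.py -method sort before diffing, so key ordering cannot cause false mismatches. Done by hand it looks roughly like this (temp-file names are placeholders, and config_filter.py is assumed to read stdin, as the argument-free invocations in the trace suggest):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.sorted
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.sorted
diff -u /tmp/live.sorted /tmp/file.sorted && echo 'INFO: JSON config files are the same'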
00:06:16.650 10:55:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:16.650 10:55:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:16.908 10:55:22 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.908 10:55:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:16.909 10:55:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.909 + '[' 2 -ne 2 ']' 00:06:16.909 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:16.909 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:16.909 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:16.909 +++ basename /dev/fd/62 00:06:16.909 ++ mktemp /tmp/62.XXX 00:06:16.909 + tmp_file_1=/tmp/62.MAJ 00:06:16.909 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.909 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:16.909 + tmp_file_2=/tmp/spdk_tgt_config.json.fwj 00:06:16.909 + ret=0 00:06:16.909 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:17.167 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:17.426 + diff -u /tmp/62.MAJ /tmp/spdk_tgt_config.json.fwj 00:06:17.426 + ret=1 00:06:17.426 + echo '=== Start of file: /tmp/62.MAJ ===' 00:06:17.426 + cat /tmp/62.MAJ 00:06:17.426 + echo '=== End of file: /tmp/62.MAJ ===' 00:06:17.426 + echo '' 00:06:17.426 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fwj ===' 00:06:17.426 + cat /tmp/spdk_tgt_config.json.fwj 00:06:17.426 + echo '=== End of file: /tmp/spdk_tgt_config.json.fwj ===' 00:06:17.426 + echo '' 00:06:17.426 + rm /tmp/62.MAJ /tmp/spdk_tgt_config.json.fwj 00:06:17.426 + exit 1 00:06:17.426 INFO: configuration change detected. 00:06:17.426 10:55:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
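Note: the change-detection pass above works because MallocBdevForConfigChangeCheck is a sentinel bdev created only for this purpose; deleting it guarantees the running config diverges from the saved file, so the same sorted diff now exits non-zero. In outline (same placeholder temp files as in the previous note):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.sorted
diff -u /tmp/live.sorted /tmp/file.sorted || echo 'INFO: configuration change detected.'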
00:06:17.426 10:55:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:17.426 10:55:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:17.426 10:55:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.426 10:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.426 10:55:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:17.426 10:55:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 70703 ]] 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@330 -- # killprocess 70703 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@952 -- # '[' -z 70703 ']' 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@956 -- # kill -0 70703 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@957 -- # uname 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70703 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:17.427 killing process with pid 70703 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70703' 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@971 -- # kill 70703 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@976 -- # wait 70703 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.427 10:55:22 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.427 10:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.686 10:55:22 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:17.686 INFO: Success 00:06:17.686 10:55:22 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:17.686 00:06:17.686 real 0m8.596s 00:06:17.686 user 0m12.584s 00:06:17.686 sys 0m1.418s 00:06:17.686 
10:55:22 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.686 10:55:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.686 ************************************ 00:06:17.686 END TEST json_config 00:06:17.686 ************************************ 00:06:17.686 10:55:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:17.686 10:55:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.686 10:55:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.686 10:55:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.686 ************************************ 00:06:17.686 START TEST json_config_extra_key 00:06:17.686 ************************************ 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.686 10:55:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.686 10:55:23 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:17.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.687 --rc genhtml_branch_coverage=1 00:06:17.687 --rc genhtml_function_coverage=1 00:06:17.687 --rc genhtml_legend=1 00:06:17.687 --rc geninfo_all_blocks=1 00:06:17.687 --rc geninfo_unexecuted_blocks=1 00:06:17.687 00:06:17.687 ' 00:06:17.687 10:55:23 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:17.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.687 --rc genhtml_branch_coverage=1 00:06:17.687 --rc genhtml_function_coverage=1 00:06:17.687 --rc genhtml_legend=1 00:06:17.687 --rc geninfo_all_blocks=1 00:06:17.687 --rc geninfo_unexecuted_blocks=1 00:06:17.687 00:06:17.687 ' 00:06:17.687 10:55:23 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:17.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.687 --rc genhtml_branch_coverage=1 00:06:17.687 --rc genhtml_function_coverage=1 00:06:17.687 --rc genhtml_legend=1 00:06:17.687 --rc geninfo_all_blocks=1 00:06:17.687 --rc geninfo_unexecuted_blocks=1 00:06:17.687 00:06:17.687 ' 00:06:17.687 10:55:23 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:17.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.687 --rc genhtml_branch_coverage=1 00:06:17.687 --rc genhtml_function_coverage=1 00:06:17.687 --rc genhtml_legend=1 00:06:17.687 --rc geninfo_all_blocks=1 00:06:17.687 --rc geninfo_unexecuted_blocks=1 00:06:17.687 00:06:17.687 ' 00:06:17.687 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.687 10:55:23 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.687 10:55:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.946 10:55:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.946 10:55:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.946 10:55:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.946 10:55:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.946 10:55:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.946 10:55:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.946 10:55:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.946 10:55:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:17.946 10:55:23 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.946 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.946 10:55:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.946 INFO: launching applications... 00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
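Note: json_config_extra_key.sh drives the same json_config/common.sh helpers as the previous test, but everything is keyed by app name through the associative arrays declared above, so the single 'target' entry carries its RPC socket, launch parameters, and the extra_key.json it is started with. A trimmed sketch of that pattern (the launch comment mirrors the common.sh@21 invocation traced below):

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
# json_config_test_start_app target --json "${configs_path[target]}" then runs:
#   build/bin/spdk_tgt ${app_params[target]} -r "${app_socket[target]}" --json "${configs_path[target]}"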
00:06:17.946 10:55:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:17.946 10:55:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:17.946 10:55:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:17.946 10:55:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.946 10:55:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.946 10:55:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.946 10:55:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.947 10:55:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.947 10:55:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70851 00:06:17.947 Waiting for target to run... 00:06:17.947 10:55:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.947 10:55:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70851 /var/tmp/spdk_tgt.sock 00:06:17.947 10:55:23 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 70851 ']' 00:06:17.947 10:55:23 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.947 10:55:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:17.947 10:55:23 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.947 10:55:23 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.947 10:55:23 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.947 10:55:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.947 [2024-10-29 10:55:23.276166] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:17.947 [2024-10-29 10:55:23.276276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70851 ] 00:06:18.206 [2024-10-29 10:55:23.598743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.206 [2024-10-29 10:55:23.615420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.206 [2024-10-29 10:55:23.640654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.143 10:55:24 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.143 10:55:24 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:19.143 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:19.143 INFO: shutting down applications... 00:06:19.143 10:55:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
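Note: waitforlisten (common/autotest_common.sh), whose return is traced above, blocks until the newly started target answers on its RPC socket and gives up after max_retries attempts. A minimal stand-in, not the real implementation, assuming rpc_get_methods is used as the probe call:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                       # target process died
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1                                                         # never came up
}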
00:06:19.143 10:55:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70851 ]] 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70851 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70851 00:06:19.143 10:55:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70851 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.402 SPDK target shutdown done 00:06:19.402 10:55:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.402 Success 00:06:19.402 10:55:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:19.402 00:06:19.402 real 0m1.800s 00:06:19.402 user 0m1.674s 00:06:19.402 sys 0m0.330s 00:06:19.402 10:55:24 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.402 10:55:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.402 ************************************ 00:06:19.402 END TEST json_config_extra_key 00:06:19.402 ************************************ 00:06:19.402 10:55:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.402 10:55:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:19.402 10:55:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.402 10:55:24 -- common/autotest_common.sh@10 -- # set +x 00:06:19.402 ************************************ 00:06:19.402 START TEST alias_rpc 00:06:19.402 ************************************ 00:06:19.402 10:55:24 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.661 * Looking for test storage... 
00:06:19.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:19.661 10:55:24 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.661 10:55:24 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.661 10:55:24 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.661 10:55:25 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.661 10:55:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.662 10:55:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.662 --rc genhtml_branch_coverage=1 00:06:19.662 --rc genhtml_function_coverage=1 00:06:19.662 --rc genhtml_legend=1 00:06:19.662 --rc geninfo_all_blocks=1 00:06:19.662 --rc geninfo_unexecuted_blocks=1 00:06:19.662 00:06:19.662 ' 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.662 --rc genhtml_branch_coverage=1 00:06:19.662 --rc genhtml_function_coverage=1 00:06:19.662 --rc genhtml_legend=1 00:06:19.662 --rc geninfo_all_blocks=1 00:06:19.662 --rc geninfo_unexecuted_blocks=1 00:06:19.662 00:06:19.662 ' 00:06:19.662 10:55:25 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.662 --rc genhtml_branch_coverage=1 00:06:19.662 --rc genhtml_function_coverage=1 00:06:19.662 --rc genhtml_legend=1 00:06:19.662 --rc geninfo_all_blocks=1 00:06:19.662 --rc geninfo_unexecuted_blocks=1 00:06:19.662 00:06:19.662 ' 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.662 --rc genhtml_branch_coverage=1 00:06:19.662 --rc genhtml_function_coverage=1 00:06:19.662 --rc genhtml_legend=1 00:06:19.662 --rc geninfo_all_blocks=1 00:06:19.662 --rc geninfo_unexecuted_blocks=1 00:06:19.662 00:06:19.662 ' 00:06:19.662 10:55:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.662 10:55:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70928 00:06:19.662 10:55:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70928 00:06:19.662 10:55:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 70928 ']' 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.662 10:55:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.662 [2024-10-29 10:55:25.119290] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
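Note: alias_rpc.sh is the shortest of these drivers: it starts spdk_tgt without -r, so the RPC server listens on the default /var/tmp/spdk.sock, waits for it, runs rpc.py load_config -i as the test body, and tears the target down with killprocess. Condensed from the trace (the backgrounding and $! pid capture are implied by the trace rather than shown in it):

trap 'killprocess $spdk_tgt_pid; exit 1' ERR
"$rootdir/build/bin/spdk_tgt" &                # no -r: RPC listens on /var/tmp/spdk.sock
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"
"$rootdir/scripts/rpc.py" load_config -i       # the test body proper
killprocess "$spdk_tgt_pid"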
00:06:19.662 [2024-10-29 10:55:25.119396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:06:19.922 [2024-10-29 10:55:25.264705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.922 [2024-10-29 10:55:25.283883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.922 [2024-10-29 10:55:25.319491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.181 10:55:25 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.181 10:55:25 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:20.181 10:55:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:20.439 10:55:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70928 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 70928 ']' 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 70928 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70928 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.439 killing process with pid 70928 00:06:20.439 10:55:25 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70928' 00:06:20.440 10:55:25 alias_rpc -- common/autotest_common.sh@971 -- # kill 70928 00:06:20.440 10:55:25 alias_rpc -- common/autotest_common.sh@976 -- # wait 70928 00:06:20.699 00:06:20.699 real 0m1.153s 00:06:20.699 user 0m1.350s 00:06:20.699 sys 0m0.314s 00:06:20.699 10:55:26 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.699 10:55:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.699 ************************************ 00:06:20.699 END TEST alias_rpc 00:06:20.699 ************************************ 00:06:20.699 10:55:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:20.699 10:55:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:20.699 10:55:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.699 10:55:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.699 10:55:26 -- common/autotest_common.sh@10 -- # set +x 00:06:20.699 ************************************ 00:06:20.699 START TEST spdkcli_tcp 00:06:20.699 ************************************ 00:06:20.699 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:20.699 * Looking for test storage... 
00:06:20.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:20.699 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.699 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.699 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.958 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.958 10:55:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:20.958 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.958 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.958 --rc genhtml_branch_coverage=1 00:06:20.958 --rc genhtml_function_coverage=1 00:06:20.958 --rc genhtml_legend=1 00:06:20.958 --rc geninfo_all_blocks=1 00:06:20.958 --rc geninfo_unexecuted_blocks=1 00:06:20.958 00:06:20.958 ' 00:06:20.958 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.958 --rc genhtml_branch_coverage=1 00:06:20.958 --rc genhtml_function_coverage=1 00:06:20.958 --rc genhtml_legend=1 00:06:20.958 --rc geninfo_all_blocks=1 00:06:20.958 --rc geninfo_unexecuted_blocks=1 00:06:20.958 
00:06:20.958 ' 00:06:20.958 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.958 --rc genhtml_branch_coverage=1 00:06:20.958 --rc genhtml_function_coverage=1 00:06:20.958 --rc genhtml_legend=1 00:06:20.958 --rc geninfo_all_blocks=1 00:06:20.958 --rc geninfo_unexecuted_blocks=1 00:06:20.958 00:06:20.958 ' 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.959 --rc genhtml_branch_coverage=1 00:06:20.959 --rc genhtml_function_coverage=1 00:06:20.959 --rc genhtml_legend=1 00:06:20.959 --rc geninfo_all_blocks=1 00:06:20.959 --rc geninfo_unexecuted_blocks=1 00:06:20.959 00:06:20.959 ' 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71000 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71000 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 71000 ']' 00:06:20.959 10:55:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:20.959 10:55:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.959 [2024-10-29 10:55:26.321825] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
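Note: spdkcli_tcp starts its target with -m 0x3 -p 0, i.e. a core mask covering cores 0 and 1 (hence the two reactors in the next lines) with core 0 as the main core. RPC traffic is then driven over TCP: socat bridges 127.0.0.1:9998 to the UNIX RPC socket, which is what the IP_ADDRESS/PORT variables above set up, as the following trace lines show. A condensed sketch of that sequence:

"$rootdir/build/bin/spdk_tgt" -m 0x3 -p 0 &               # mask 0b11 -> reactors on cores 0 and 1
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge TCP 9998 to the RPC socket
socat_pid=$!
"$rootdir/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods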
00:06:20.959 [2024-10-29 10:55:26.321922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71000 ] 00:06:21.218 [2024-10-29 10:55:26.469189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.218 [2024-10-29 10:55:26.490932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.218 [2024-10-29 10:55:26.490939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.218 [2024-10-29 10:55:26.526995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.787 10:55:27 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.787 10:55:27 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:21.787 10:55:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.787 10:55:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71017 00:06:21.787 10:55:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:22.046 [ 00:06:22.046 "bdev_malloc_delete", 00:06:22.046 "bdev_malloc_create", 00:06:22.046 "bdev_null_resize", 00:06:22.046 "bdev_null_delete", 00:06:22.046 "bdev_null_create", 00:06:22.046 "bdev_nvme_cuse_unregister", 00:06:22.046 "bdev_nvme_cuse_register", 00:06:22.046 "bdev_opal_new_user", 00:06:22.046 "bdev_opal_set_lock_state", 00:06:22.046 "bdev_opal_delete", 00:06:22.046 "bdev_opal_get_info", 00:06:22.046 "bdev_opal_create", 00:06:22.046 "bdev_nvme_opal_revert", 00:06:22.046 "bdev_nvme_opal_init", 00:06:22.046 "bdev_nvme_send_cmd", 00:06:22.046 "bdev_nvme_set_keys", 00:06:22.046 "bdev_nvme_get_path_iostat", 00:06:22.046 "bdev_nvme_get_mdns_discovery_info", 00:06:22.046 "bdev_nvme_stop_mdns_discovery", 00:06:22.046 "bdev_nvme_start_mdns_discovery", 00:06:22.046 "bdev_nvme_set_multipath_policy", 00:06:22.046 "bdev_nvme_set_preferred_path", 00:06:22.046 "bdev_nvme_get_io_paths", 00:06:22.046 "bdev_nvme_remove_error_injection", 00:06:22.046 "bdev_nvme_add_error_injection", 00:06:22.046 "bdev_nvme_get_discovery_info", 00:06:22.046 "bdev_nvme_stop_discovery", 00:06:22.046 "bdev_nvme_start_discovery", 00:06:22.046 "bdev_nvme_get_controller_health_info", 00:06:22.046 "bdev_nvme_disable_controller", 00:06:22.046 "bdev_nvme_enable_controller", 00:06:22.046 "bdev_nvme_reset_controller", 00:06:22.046 "bdev_nvme_get_transport_statistics", 00:06:22.046 "bdev_nvme_apply_firmware", 00:06:22.046 "bdev_nvme_detach_controller", 00:06:22.046 "bdev_nvme_get_controllers", 00:06:22.046 "bdev_nvme_attach_controller", 00:06:22.046 "bdev_nvme_set_hotplug", 00:06:22.046 "bdev_nvme_set_options", 00:06:22.046 "bdev_passthru_delete", 00:06:22.046 "bdev_passthru_create", 00:06:22.046 "bdev_lvol_set_parent_bdev", 00:06:22.046 "bdev_lvol_set_parent", 00:06:22.046 "bdev_lvol_check_shallow_copy", 00:06:22.046 "bdev_lvol_start_shallow_copy", 00:06:22.046 "bdev_lvol_grow_lvstore", 00:06:22.046 "bdev_lvol_get_lvols", 00:06:22.046 "bdev_lvol_get_lvstores", 00:06:22.046 "bdev_lvol_delete", 00:06:22.046 "bdev_lvol_set_read_only", 00:06:22.046 "bdev_lvol_resize", 00:06:22.046 "bdev_lvol_decouple_parent", 00:06:22.046 "bdev_lvol_inflate", 00:06:22.046 "bdev_lvol_rename", 00:06:22.046 "bdev_lvol_clone_bdev", 00:06:22.046 "bdev_lvol_clone", 00:06:22.046 "bdev_lvol_snapshot", 
00:06:22.046 "bdev_lvol_create", 00:06:22.046 "bdev_lvol_delete_lvstore", 00:06:22.046 "bdev_lvol_rename_lvstore", 00:06:22.046 "bdev_lvol_create_lvstore", 00:06:22.046 "bdev_raid_set_options", 00:06:22.046 "bdev_raid_remove_base_bdev", 00:06:22.046 "bdev_raid_add_base_bdev", 00:06:22.046 "bdev_raid_delete", 00:06:22.046 "bdev_raid_create", 00:06:22.046 "bdev_raid_get_bdevs", 00:06:22.046 "bdev_error_inject_error", 00:06:22.046 "bdev_error_delete", 00:06:22.046 "bdev_error_create", 00:06:22.046 "bdev_split_delete", 00:06:22.046 "bdev_split_create", 00:06:22.046 "bdev_delay_delete", 00:06:22.046 "bdev_delay_create", 00:06:22.046 "bdev_delay_update_latency", 00:06:22.046 "bdev_zone_block_delete", 00:06:22.046 "bdev_zone_block_create", 00:06:22.046 "blobfs_create", 00:06:22.046 "blobfs_detect", 00:06:22.046 "blobfs_set_cache_size", 00:06:22.046 "bdev_aio_delete", 00:06:22.046 "bdev_aio_rescan", 00:06:22.046 "bdev_aio_create", 00:06:22.046 "bdev_ftl_set_property", 00:06:22.046 "bdev_ftl_get_properties", 00:06:22.046 "bdev_ftl_get_stats", 00:06:22.046 "bdev_ftl_unmap", 00:06:22.046 "bdev_ftl_unload", 00:06:22.046 "bdev_ftl_delete", 00:06:22.046 "bdev_ftl_load", 00:06:22.046 "bdev_ftl_create", 00:06:22.046 "bdev_virtio_attach_controller", 00:06:22.046 "bdev_virtio_scsi_get_devices", 00:06:22.046 "bdev_virtio_detach_controller", 00:06:22.046 "bdev_virtio_blk_set_hotplug", 00:06:22.046 "bdev_iscsi_delete", 00:06:22.046 "bdev_iscsi_create", 00:06:22.046 "bdev_iscsi_set_options", 00:06:22.046 "bdev_uring_delete", 00:06:22.046 "bdev_uring_rescan", 00:06:22.046 "bdev_uring_create", 00:06:22.046 "accel_error_inject_error", 00:06:22.046 "ioat_scan_accel_module", 00:06:22.046 "dsa_scan_accel_module", 00:06:22.046 "iaa_scan_accel_module", 00:06:22.047 "vfu_virtio_create_fs_endpoint", 00:06:22.047 "vfu_virtio_create_scsi_endpoint", 00:06:22.047 "vfu_virtio_scsi_remove_target", 00:06:22.047 "vfu_virtio_scsi_add_target", 00:06:22.047 "vfu_virtio_create_blk_endpoint", 00:06:22.047 "vfu_virtio_delete_endpoint", 00:06:22.047 "keyring_file_remove_key", 00:06:22.047 "keyring_file_add_key", 00:06:22.047 "keyring_linux_set_options", 00:06:22.047 "fsdev_aio_delete", 00:06:22.047 "fsdev_aio_create", 00:06:22.047 "iscsi_get_histogram", 00:06:22.047 "iscsi_enable_histogram", 00:06:22.047 "iscsi_set_options", 00:06:22.047 "iscsi_get_auth_groups", 00:06:22.047 "iscsi_auth_group_remove_secret", 00:06:22.047 "iscsi_auth_group_add_secret", 00:06:22.047 "iscsi_delete_auth_group", 00:06:22.047 "iscsi_create_auth_group", 00:06:22.047 "iscsi_set_discovery_auth", 00:06:22.047 "iscsi_get_options", 00:06:22.047 "iscsi_target_node_request_logout", 00:06:22.047 "iscsi_target_node_set_redirect", 00:06:22.047 "iscsi_target_node_set_auth", 00:06:22.047 "iscsi_target_node_add_lun", 00:06:22.047 "iscsi_get_stats", 00:06:22.047 "iscsi_get_connections", 00:06:22.047 "iscsi_portal_group_set_auth", 00:06:22.047 "iscsi_start_portal_group", 00:06:22.047 "iscsi_delete_portal_group", 00:06:22.047 "iscsi_create_portal_group", 00:06:22.047 "iscsi_get_portal_groups", 00:06:22.047 "iscsi_delete_target_node", 00:06:22.047 "iscsi_target_node_remove_pg_ig_maps", 00:06:22.047 "iscsi_target_node_add_pg_ig_maps", 00:06:22.047 "iscsi_create_target_node", 00:06:22.047 "iscsi_get_target_nodes", 00:06:22.047 "iscsi_delete_initiator_group", 00:06:22.047 "iscsi_initiator_group_remove_initiators", 00:06:22.047 "iscsi_initiator_group_add_initiators", 00:06:22.047 "iscsi_create_initiator_group", 00:06:22.047 "iscsi_get_initiator_groups", 00:06:22.047 
"nvmf_set_crdt", 00:06:22.047 "nvmf_set_config", 00:06:22.047 "nvmf_set_max_subsystems", 00:06:22.047 "nvmf_stop_mdns_prr", 00:06:22.047 "nvmf_publish_mdns_prr", 00:06:22.047 "nvmf_subsystem_get_listeners", 00:06:22.047 "nvmf_subsystem_get_qpairs", 00:06:22.047 "nvmf_subsystem_get_controllers", 00:06:22.047 "nvmf_get_stats", 00:06:22.047 "nvmf_get_transports", 00:06:22.047 "nvmf_create_transport", 00:06:22.047 "nvmf_get_targets", 00:06:22.047 "nvmf_delete_target", 00:06:22.047 "nvmf_create_target", 00:06:22.047 "nvmf_subsystem_allow_any_host", 00:06:22.047 "nvmf_subsystem_set_keys", 00:06:22.047 "nvmf_subsystem_remove_host", 00:06:22.047 "nvmf_subsystem_add_host", 00:06:22.047 "nvmf_ns_remove_host", 00:06:22.047 "nvmf_ns_add_host", 00:06:22.047 "nvmf_subsystem_remove_ns", 00:06:22.047 "nvmf_subsystem_set_ns_ana_group", 00:06:22.047 "nvmf_subsystem_add_ns", 00:06:22.047 "nvmf_subsystem_listener_set_ana_state", 00:06:22.047 "nvmf_discovery_get_referrals", 00:06:22.047 "nvmf_discovery_remove_referral", 00:06:22.047 "nvmf_discovery_add_referral", 00:06:22.047 "nvmf_subsystem_remove_listener", 00:06:22.047 "nvmf_subsystem_add_listener", 00:06:22.047 "nvmf_delete_subsystem", 00:06:22.047 "nvmf_create_subsystem", 00:06:22.047 "nvmf_get_subsystems", 00:06:22.047 "env_dpdk_get_mem_stats", 00:06:22.047 "nbd_get_disks", 00:06:22.047 "nbd_stop_disk", 00:06:22.047 "nbd_start_disk", 00:06:22.047 "ublk_recover_disk", 00:06:22.047 "ublk_get_disks", 00:06:22.047 "ublk_stop_disk", 00:06:22.047 "ublk_start_disk", 00:06:22.047 "ublk_destroy_target", 00:06:22.047 "ublk_create_target", 00:06:22.047 "virtio_blk_create_transport", 00:06:22.047 "virtio_blk_get_transports", 00:06:22.047 "vhost_controller_set_coalescing", 00:06:22.047 "vhost_get_controllers", 00:06:22.047 "vhost_delete_controller", 00:06:22.047 "vhost_create_blk_controller", 00:06:22.047 "vhost_scsi_controller_remove_target", 00:06:22.047 "vhost_scsi_controller_add_target", 00:06:22.047 "vhost_start_scsi_controller", 00:06:22.047 "vhost_create_scsi_controller", 00:06:22.047 "thread_set_cpumask", 00:06:22.047 "scheduler_set_options", 00:06:22.047 "framework_get_governor", 00:06:22.047 "framework_get_scheduler", 00:06:22.047 "framework_set_scheduler", 00:06:22.047 "framework_get_reactors", 00:06:22.047 "thread_get_io_channels", 00:06:22.047 "thread_get_pollers", 00:06:22.047 "thread_get_stats", 00:06:22.047 "framework_monitor_context_switch", 00:06:22.047 "spdk_kill_instance", 00:06:22.047 "log_enable_timestamps", 00:06:22.047 "log_get_flags", 00:06:22.047 "log_clear_flag", 00:06:22.047 "log_set_flag", 00:06:22.047 "log_get_level", 00:06:22.047 "log_set_level", 00:06:22.047 "log_get_print_level", 00:06:22.047 "log_set_print_level", 00:06:22.047 "framework_enable_cpumask_locks", 00:06:22.047 "framework_disable_cpumask_locks", 00:06:22.047 "framework_wait_init", 00:06:22.047 "framework_start_init", 00:06:22.047 "scsi_get_devices", 00:06:22.047 "bdev_get_histogram", 00:06:22.047 "bdev_enable_histogram", 00:06:22.047 "bdev_set_qos_limit", 00:06:22.047 "bdev_set_qd_sampling_period", 00:06:22.047 "bdev_get_bdevs", 00:06:22.047 "bdev_reset_iostat", 00:06:22.047 "bdev_get_iostat", 00:06:22.047 "bdev_examine", 00:06:22.047 "bdev_wait_for_examine", 00:06:22.047 "bdev_set_options", 00:06:22.047 "accel_get_stats", 00:06:22.047 "accel_set_options", 00:06:22.047 "accel_set_driver", 00:06:22.047 "accel_crypto_key_destroy", 00:06:22.047 "accel_crypto_keys_get", 00:06:22.047 "accel_crypto_key_create", 00:06:22.047 "accel_assign_opc", 00:06:22.047 
"accel_get_module_info", 00:06:22.047 "accel_get_opc_assignments", 00:06:22.047 "vmd_rescan", 00:06:22.047 "vmd_remove_device", 00:06:22.047 "vmd_enable", 00:06:22.047 "sock_get_default_impl", 00:06:22.047 "sock_set_default_impl", 00:06:22.047 "sock_impl_set_options", 00:06:22.047 "sock_impl_get_options", 00:06:22.047 "iobuf_get_stats", 00:06:22.047 "iobuf_set_options", 00:06:22.047 "keyring_get_keys", 00:06:22.047 "vfu_tgt_set_base_path", 00:06:22.047 "framework_get_pci_devices", 00:06:22.047 "framework_get_config", 00:06:22.047 "framework_get_subsystems", 00:06:22.047 "fsdev_set_opts", 00:06:22.047 "fsdev_get_opts", 00:06:22.047 "trace_get_info", 00:06:22.047 "trace_get_tpoint_group_mask", 00:06:22.047 "trace_disable_tpoint_group", 00:06:22.047 "trace_enable_tpoint_group", 00:06:22.047 "trace_clear_tpoint_mask", 00:06:22.047 "trace_set_tpoint_mask", 00:06:22.047 "notify_get_notifications", 00:06:22.047 "notify_get_types", 00:06:22.047 "spdk_get_version", 00:06:22.047 "rpc_get_methods" 00:06:22.047 ] 00:06:22.047 10:55:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.047 10:55:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:22.047 10:55:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71000 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 71000 ']' 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 71000 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.047 10:55:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71000 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.306 killing process with pid 71000 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71000' 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 71000 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 71000 00:06:22.306 00:06:22.306 real 0m1.704s 00:06:22.306 user 0m3.228s 00:06:22.306 sys 0m0.375s 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.306 10:55:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.306 ************************************ 00:06:22.306 END TEST spdkcli_tcp 00:06:22.306 ************************************ 00:06:22.566 10:55:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.566 10:55:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.566 10:55:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.566 10:55:27 -- common/autotest_common.sh@10 -- # set +x 00:06:22.566 ************************************ 00:06:22.566 START TEST dpdk_mem_utility 00:06:22.566 ************************************ 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.566 * Looking for test storage... 
00:06:22.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.566 10:55:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.566 --rc genhtml_branch_coverage=1 00:06:22.566 --rc genhtml_function_coverage=1 00:06:22.566 --rc genhtml_legend=1 00:06:22.566 --rc geninfo_all_blocks=1 00:06:22.566 --rc geninfo_unexecuted_blocks=1 00:06:22.566 00:06:22.566 ' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.566 --rc 
genhtml_branch_coverage=1 00:06:22.566 --rc genhtml_function_coverage=1 00:06:22.566 --rc genhtml_legend=1 00:06:22.566 --rc geninfo_all_blocks=1 00:06:22.566 --rc geninfo_unexecuted_blocks=1 00:06:22.566 00:06:22.566 ' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.566 --rc genhtml_branch_coverage=1 00:06:22.566 --rc genhtml_function_coverage=1 00:06:22.566 --rc genhtml_legend=1 00:06:22.566 --rc geninfo_all_blocks=1 00:06:22.566 --rc geninfo_unexecuted_blocks=1 00:06:22.566 00:06:22.566 ' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.566 --rc genhtml_branch_coverage=1 00:06:22.566 --rc genhtml_function_coverage=1 00:06:22.566 --rc genhtml_legend=1 00:06:22.566 --rc geninfo_all_blocks=1 00:06:22.566 --rc geninfo_unexecuted_blocks=1 00:06:22.566 00:06:22.566 ' 00:06:22.566 10:55:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:22.566 10:55:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71094 00:06:22.566 10:55:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:22.566 10:55:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71094 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 71094 ']' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.566 10:55:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.566 [2024-10-29 10:55:28.030152] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:22.566 [2024-10-29 10:55:28.030235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71094 ] 00:06:22.825 [2024-10-29 10:55:28.173728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.825 [2024-10-29 10:55:28.196239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.825 [2024-10-29 10:55:28.232487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.086 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.086 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:23.086 10:55:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:23.086 10:55:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:23.086 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.086 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.086 { 00:06:23.086 "filename": "/tmp/spdk_mem_dump.txt" 00:06:23.086 } 00:06:23.086 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.086 10:55:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:23.086 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:23.086 1 heaps totaling size 810.000000 MiB 00:06:23.086 size: 810.000000 MiB heap id: 0 00:06:23.086 end heaps---------- 00:06:23.086 9 mempools totaling size 595.772034 MiB 00:06:23.086 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:23.086 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:23.086 size: 92.545471 MiB name: bdev_io_71094 00:06:23.086 size: 50.003479 MiB name: msgpool_71094 00:06:23.086 size: 36.509338 MiB name: fsdev_io_71094 00:06:23.086 size: 21.763794 MiB name: PDU_Pool 00:06:23.086 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.086 size: 4.133484 MiB name: evtpool_71094 00:06:23.086 size: 0.026123 MiB name: Session_Pool 00:06:23.086 end mempools------- 00:06:23.086 6 memzones totaling size 4.142822 MiB 00:06:23.086 size: 1.000366 MiB name: RG_ring_0_71094 00:06:23.086 size: 1.000366 MiB name: RG_ring_1_71094 00:06:23.086 size: 1.000366 MiB name: RG_ring_4_71094 00:06:23.086 size: 1.000366 MiB name: RG_ring_5_71094 00:06:23.086 size: 0.125366 MiB name: RG_ring_2_71094 00:06:23.086 size: 0.015991 MiB name: RG_ring_3_71094 00:06:23.086 end memzones------- 00:06:23.086 10:55:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.086 heap id: 0 total size: 810.000000 MiB number of busy elements: 310 number of free elements: 15 00:06:23.086 list of free elements. 
size: 10.813782 MiB 00:06:23.086 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:23.086 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:23.086 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:23.086 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:23.086 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:23.086 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:23.086 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:23.086 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:23.086 element at address: 0x20001a600000 with size: 0.568237 MiB 00:06:23.086 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:23.086 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:23.086 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:23.086 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:23.086 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:23.086 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:23.086 list of standard malloc elements. size: 199.267334 MiB 00:06:23.086 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:23.086 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:23.086 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:23.086 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:23.086 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:23.086 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:23.086 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:23.086 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:23.086 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:23.086 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:23.086 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:23.086 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:23.086 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:23.087 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:23.087 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:23.087 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:23.087 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6931c0 with size: 0.000183 MiB 
00:06:23.087 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:23.087 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:23.088 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:23.088 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:23.088 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:23.088 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:23.088 element at 
address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e880 
with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:23.088 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:23.088 list of memzone associated elements. 
size: 599.918884 MiB 00:06:23.088 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:23.088 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:23.088 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:23.088 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:23.088 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:23.088 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71094_0 00:06:23.088 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:23.088 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71094_0 00:06:23.088 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:23.088 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71094_0 00:06:23.088 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:23.088 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:23.088 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:23.088 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:23.088 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:23.088 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71094_0 00:06:23.088 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:23.088 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71094 00:06:23.088 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:23.088 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71094 00:06:23.088 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:23.088 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:23.088 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:23.088 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:23.088 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:23.088 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:23.088 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:23.088 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:23.088 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:23.088 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71094 00:06:23.088 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:23.088 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71094 00:06:23.088 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:23.088 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71094 00:06:23.088 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:23.088 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71094 00:06:23.088 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:23.088 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71094 00:06:23.088 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:23.088 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71094 00:06:23.088 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:23.088 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:23.088 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:23.088 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:23.088 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:23.088 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:23.088 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:23.088 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71094 00:06:23.088 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:23.088 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71094 00:06:23.088 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:23.088 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:23.088 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:23.088 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:23.088 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:23.088 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71094 00:06:23.088 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:23.088 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:23.088 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:23.088 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71094 00:06:23.088 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:23.088 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71094 00:06:23.088 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:23.088 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71094 00:06:23.088 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:23.088 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:23.089 10:55:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:23.089 10:55:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71094 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 71094 ']' 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 71094 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71094 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:23.089 killing process with pid 71094 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71094' 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 71094 00:06:23.089 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 71094 00:06:23.348 00:06:23.348 real 0m0.918s 00:06:23.348 user 0m0.997s 00:06:23.348 sys 0m0.293s 00:06:23.348 ************************************ 00:06:23.348 END TEST dpdk_mem_utility 00:06:23.348 ************************************ 00:06:23.348 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.348 10:55:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 10:55:28 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:23.348 10:55:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:23.348 10:55:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.348 10:55:28 -- common/autotest_common.sh@10 -- # set +x 
00:06:23.348 ************************************ 00:06:23.348 START TEST event 00:06:23.348 ************************************ 00:06:23.348 10:55:28 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:23.348 * Looking for test storage... 00:06:23.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.606 10:55:28 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.606 10:55:28 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.606 10:55:28 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.606 10:55:28 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.606 10:55:28 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.606 10:55:28 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.606 10:55:28 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.606 10:55:28 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.606 10:55:28 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.606 10:55:28 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.606 10:55:28 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.606 10:55:28 event -- scripts/common.sh@344 -- # case "$op" in 00:06:23.606 10:55:28 event -- scripts/common.sh@345 -- # : 1 00:06:23.606 10:55:28 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.606 10:55:28 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.606 10:55:28 event -- scripts/common.sh@365 -- # decimal 1 00:06:23.606 10:55:28 event -- scripts/common.sh@353 -- # local d=1 00:06:23.606 10:55:28 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.606 10:55:28 event -- scripts/common.sh@355 -- # echo 1 00:06:23.606 10:55:28 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.606 10:55:28 event -- scripts/common.sh@366 -- # decimal 2 00:06:23.606 10:55:28 event -- scripts/common.sh@353 -- # local d=2 00:06:23.606 10:55:28 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.606 10:55:28 event -- scripts/common.sh@355 -- # echo 2 00:06:23.606 10:55:28 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.606 10:55:28 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.606 10:55:28 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.606 10:55:28 event -- scripts/common.sh@368 -- # return 0 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.606 --rc genhtml_branch_coverage=1 00:06:23.606 --rc genhtml_function_coverage=1 00:06:23.606 --rc genhtml_legend=1 00:06:23.606 --rc geninfo_all_blocks=1 00:06:23.606 --rc geninfo_unexecuted_blocks=1 00:06:23.606 00:06:23.606 ' 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.606 --rc genhtml_branch_coverage=1 00:06:23.606 --rc genhtml_function_coverage=1 00:06:23.606 --rc genhtml_legend=1 00:06:23.606 --rc 
geninfo_all_blocks=1 00:06:23.606 --rc geninfo_unexecuted_blocks=1 00:06:23.606 00:06:23.606 ' 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.606 --rc genhtml_branch_coverage=1 00:06:23.606 --rc genhtml_function_coverage=1 00:06:23.606 --rc genhtml_legend=1 00:06:23.606 --rc geninfo_all_blocks=1 00:06:23.606 --rc geninfo_unexecuted_blocks=1 00:06:23.606 00:06:23.606 ' 00:06:23.606 10:55:28 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.606 --rc genhtml_branch_coverage=1 00:06:23.606 --rc genhtml_function_coverage=1 00:06:23.606 --rc genhtml_legend=1 00:06:23.606 --rc geninfo_all_blocks=1 00:06:23.606 --rc geninfo_unexecuted_blocks=1 00:06:23.606 00:06:23.606 ' 00:06:23.606 10:55:28 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:23.606 10:55:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.607 10:55:28 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.607 10:55:28 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:23.607 10:55:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.607 10:55:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.607 ************************************ 00:06:23.607 START TEST event_perf 00:06:23.607 ************************************ 00:06:23.607 10:55:28 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.607 Running I/O for 1 seconds...[2024-10-29 10:55:28.984893] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:23.607 [2024-10-29 10:55:28.984981] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71166 ] 00:06:23.865 [2024-10-29 10:55:29.129360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.865 [2024-10-29 10:55:29.150779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.865 [2024-10-29 10:55:29.150887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.865 [2024-10-29 10:55:29.150890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.865 Running I/O for 1 seconds...[2024-10-29 10:55:29.150840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.799 00:06:24.799 lcore 0: 193812 00:06:24.799 lcore 1: 193811 00:06:24.799 lcore 2: 193811 00:06:24.799 lcore 3: 193813 00:06:24.799 done. 
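As a quick sanity check of the per-lcore counters printed above (a minimal sketch, not part of the test harness; the event_perf.log filename is only an assumed location for the captured output):

    # sum the "lcore N: <count>" lines emitted by event_perf above
    grep -Eo 'lcore [0-9]+: [0-9]+' event_perf.log \
        | awk '{ sum += $3 } END { print sum " events in the 1-second window" }'

For this run that is 193812 + 193811 + 193811 + 193813 = 775247 events spread across the four reactors requested by -m 0xF -t 1.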
00:06:24.799 00:06:24.799 real 0m1.217s 00:06:24.799 user 0m4.044s 00:06:24.799 sys 0m0.037s 00:06:24.799 ************************************ 00:06:24.799 END TEST event_perf 00:06:24.799 ************************************ 00:06:24.799 10:55:30 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.799 10:55:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.799 10:55:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:24.799 10:55:30 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:24.799 10:55:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.799 10:55:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.799 ************************************ 00:06:24.799 START TEST event_reactor 00:06:24.799 ************************************ 00:06:24.799 10:55:30 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:24.799 [2024-10-29 10:55:30.248182] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:24.799 [2024-10-29 10:55:30.248898] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71199 ] 00:06:25.058 [2024-10-29 10:55:30.393867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.058 [2024-10-29 10:55:30.414857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.995 test_start 00:06:25.995 oneshot 00:06:25.995 tick 100 00:06:25.995 tick 100 00:06:25.995 tick 250 00:06:25.995 tick 100 00:06:25.995 tick 100 00:06:25.995 tick 100 00:06:25.995 tick 250 00:06:25.995 tick 500 00:06:25.995 tick 100 00:06:25.995 tick 100 00:06:25.995 tick 250 00:06:25.995 tick 100 00:06:25.995 tick 100 00:06:25.995 test_end 00:06:25.995 00:06:25.995 real 0m1.215s 00:06:25.995 user 0m1.073s 00:06:25.995 sys 0m0.036s 00:06:25.995 10:55:31 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:25.995 10:55:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:25.995 ************************************ 00:06:25.995 END TEST event_reactor 00:06:25.995 ************************************ 00:06:25.995 10:55:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.995 10:55:31 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:25.995 10:55:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:25.995 10:55:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.254 ************************************ 00:06:26.254 START TEST event_reactor_perf 00:06:26.254 ************************************ 00:06:26.254 10:55:31 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.254 [2024-10-29 10:55:31.514362] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:26.254 [2024-10-29 10:55:31.515138] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:06:26.254 [2024-10-29 10:55:31.667327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.254 [2024-10-29 10:55:31.691480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.634 test_start 00:06:27.634 test_end 00:06:27.634 Performance: 424620 events per second 00:06:27.634 00:06:27.634 real 0m1.226s 00:06:27.634 user 0m1.084s 00:06:27.634 sys 0m0.036s 00:06:27.634 ************************************ 00:06:27.634 END TEST event_reactor_perf 00:06:27.634 ************************************ 00:06:27.634 10:55:32 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.634 10:55:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.634 10:55:32 event -- event/event.sh@49 -- # uname -s 00:06:27.634 10:55:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.634 10:55:32 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.634 10:55:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.634 10:55:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.634 10:55:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.634 ************************************ 00:06:27.634 START TEST event_scheduler 00:06:27.634 ************************************ 00:06:27.634 10:55:32 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.634 * Looking for test storage... 
00:06:27.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:27.634 10:55:32 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.634 10:55:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.634 10:55:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.634 10:55:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.634 10:55:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.635 10:55:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.635 --rc genhtml_branch_coverage=1 00:06:27.635 --rc genhtml_function_coverage=1 00:06:27.635 --rc genhtml_legend=1 00:06:27.635 --rc geninfo_all_blocks=1 00:06:27.635 --rc geninfo_unexecuted_blocks=1 00:06:27.635 00:06:27.635 ' 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.635 --rc genhtml_branch_coverage=1 00:06:27.635 --rc genhtml_function_coverage=1 00:06:27.635 --rc genhtml_legend=1 00:06:27.635 --rc geninfo_all_blocks=1 00:06:27.635 --rc geninfo_unexecuted_blocks=1 00:06:27.635 00:06:27.635 ' 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.635 --rc genhtml_branch_coverage=1 00:06:27.635 --rc genhtml_function_coverage=1 00:06:27.635 --rc genhtml_legend=1 00:06:27.635 --rc geninfo_all_blocks=1 00:06:27.635 --rc geninfo_unexecuted_blocks=1 00:06:27.635 00:06:27.635 ' 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.635 --rc genhtml_branch_coverage=1 00:06:27.635 --rc genhtml_function_coverage=1 00:06:27.635 --rc genhtml_legend=1 00:06:27.635 --rc geninfo_all_blocks=1 00:06:27.635 --rc geninfo_unexecuted_blocks=1 00:06:27.635 00:06:27.635 ' 00:06:27.635 10:55:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.635 10:55:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.635 10:55:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71304 00:06:27.635 10:55:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.635 10:55:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71304 00:06:27.635 10:55:32 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 71304 ']' 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.635 10:55:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.635 [2024-10-29 10:55:33.000808] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:27.635 [2024-10-29 10:55:33.001081] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71304 ] 00:06:27.895 [2024-10-29 10:55:33.150682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.895 [2024-10-29 10:55:33.178006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.895 [2024-10-29 10:55:33.178124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.895 [2024-10-29 10:55:33.179109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.895 [2024-10-29 10:55:33.179162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:27.895 10:55:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.895 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.895 POWER: Cannot set governor of lcore 0 to performance 00:06:27.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.895 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.895 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.895 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.895 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:27.895 POWER: Unable to set Power Management Environment for lcore 0 00:06:27.895 [2024-10-29 10:55:33.301163] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:27.895 [2024-10-29 10:55:33.301316] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:27.895 [2024-10-29 10:55:33.301408] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.895 [2024-10-29 10:55:33.301504] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.895 [2024-10-29 
10:55:33.301543] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.895 [2024-10-29 10:55:33.301627] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.895 10:55:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.895 [2024-10-29 10:55:33.334157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.895 [2024-10-29 10:55:33.349840] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.895 10:55:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.895 10:55:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.895 ************************************ 00:06:27.895 START TEST scheduler_create_thread 00:06:27.895 ************************************ 00:06:27.895 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:27.895 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.896 2 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.896 3 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.896 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 4 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.155 10:55:33 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 5 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 6 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 7 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 8 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 9 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 10 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.155 10:55:33 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.155 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.722 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.722 10:55:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.722 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.722 10:55:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.099 10:55:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.099 10:55:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:30.099 10:55:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:30.099 10:55:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.099 10:55:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.034 ************************************ 00:06:31.034 END TEST scheduler_create_thread 00:06:31.034 ************************************ 00:06:31.034 10:55:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.034 00:06:31.034 real 0m3.095s 00:06:31.034 user 0m0.023s 00:06:31.034 sys 0m0.004s 00:06:31.034 10:55:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.034 10:55:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.034 10:55:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:31.034 10:55:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71304 00:06:31.034 10:55:36 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 71304 ']' 00:06:31.034 10:55:36 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 71304 00:06:31.034 10:55:36 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:31.034 10:55:36 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.034 10:55:36 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71304 00:06:31.293 killing process with pid 71304 00:06:31.293 10:55:36 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:31.293 10:55:36 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:31.293 10:55:36 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71304' 00:06:31.293 10:55:36 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 71304 00:06:31.293 
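The scheduler test traced above starts the scheduler app with -m 0xF -p 0x2 --wait-for-rpc, switches it to the dynamic scheduler (the POWER notices show the dpdk governor could not initialize in this VM, so the scheduler runs without it), and then drives everything over RPC: pinned busy/idle thread pairs per core, a thread flipped to 50% active, and a thread created and deleted again. A hedged sketch of that RPC sequence using the same names as the log; rpc_cmd is the autotest helper that wraps scripts/rpc.py against the app's /var/tmp/spdk.sock socket:

    rpc_cmd framework_set_scheduler dynamic
    rpc_cmd framework_start_init

    # One busy (-a 100) and one idle (-a 0) thread pinned to each core mask 0x1..0x8.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0

    # Unpinned threads; each create returns the new thread id, which the test captures.
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50    # id 11 in this run
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"           # id 12 in this run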
10:55:36 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 71304 00:06:31.553 [2024-10-29 10:55:36.837427] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:31.553 ************************************ 00:06:31.553 END TEST event_scheduler 00:06:31.553 ************************************ 00:06:31.553 00:06:31.553 real 0m4.203s 00:06:31.553 user 0m6.789s 00:06:31.553 sys 0m0.339s 00:06:31.553 10:55:36 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.553 10:55:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:31.553 10:55:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:31.553 10:55:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:31.553 10:55:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:31.553 10:55:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.553 10:55:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.553 ************************************ 00:06:31.553 START TEST app_repeat 00:06:31.553 ************************************ 00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:31.553 Process app_repeat pid: 71396 00:06:31.553 spdk_app_start Round 0 00:06:31.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71396 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71396' 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:31.553 10:55:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71396 /var/tmp/spdk-nbd.sock 00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71396 ']' 00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
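app_repeat, starting here, repeats the same register-export-verify cycle for three rounds (spdk_app_start Round 0/1/2), restarting the app with spdk_kill_instance SIGTERM and a 3 s sleep in between. Each round creates two malloc bdevs, exposes them as /dev/nbd0 and /dev/nbd1 over the NBD RPCs, and checks the devices with dd and cmp. A hedged sketch of one round's RPC flow against the second socket the test uses (/var/tmp/spdk-nbd.sock):

    # Helper bound to the app_repeat RPC socket.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    # Backing bdevs, created with the same arguments as the log (size 64, block size 4096).
    rpc bdev_malloc_create 64 4096     # -> Malloc0
    rpc bdev_malloc_create 64 4096     # -> Malloc1

    # Export them over NBD and confirm both are attached.
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
    rpc nbd_get_disks

    # Tear the round down and restart the app for the next one.
    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    rpc spdk_kill_instance SIGTERM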
00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.553 10:55:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.813 [2024-10-29 10:55:37.073783] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:31.813 [2024-10-29 10:55:37.073874] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:06:31.813 [2024-10-29 10:55:37.220570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.813 [2024-10-29 10:55:37.241438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.813 [2024-10-29 10:55:37.241444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.813 [2024-10-29 10:55:37.274112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.072 10:55:37 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.072 10:55:37 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:32.072 10:55:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.332 Malloc0 00:06:32.332 10:55:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.591 Malloc1 00:06:32.591 10:55:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.591 10:55:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.851 /dev/nbd0 00:06:32.851 10:55:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.851 10:55:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.851 1+0 records in 00:06:32.851 1+0 records out 00:06:32.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330379 s, 12.4 MB/s 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:32.851 10:55:38 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:32.851 10:55:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.851 10:55:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.851 10:55:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.111 /dev/nbd1 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.111 1+0 records in 00:06:33.111 1+0 records out 00:06:33.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205396 s, 19.9 MB/s 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.111 10:55:38 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:33.111 10:55:38 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.111 10:55:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.370 10:55:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.370 { 00:06:33.370 "nbd_device": "/dev/nbd0", 00:06:33.370 "bdev_name": "Malloc0" 00:06:33.370 }, 00:06:33.370 { 00:06:33.370 "nbd_device": "/dev/nbd1", 00:06:33.370 "bdev_name": "Malloc1" 00:06:33.370 } 00:06:33.370 ]' 00:06:33.370 10:55:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.370 { 00:06:33.370 "nbd_device": "/dev/nbd0", 00:06:33.370 "bdev_name": "Malloc0" 00:06:33.370 }, 00:06:33.370 { 00:06:33.370 "nbd_device": "/dev/nbd1", 00:06:33.370 "bdev_name": "Malloc1" 00:06:33.370 } 00:06:33.370 ]' 00:06:33.370 10:55:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.631 /dev/nbd1' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.631 /dev/nbd1' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.631 256+0 records in 00:06:33.631 256+0 records out 00:06:33.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451971 s, 232 MB/s 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.631 256+0 records in 00:06:33.631 256+0 records out 00:06:33.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230068 s, 45.6 MB/s 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.631 256+0 records in 00:06:33.631 
256+0 records out 00:06:33.631 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291273 s, 36.0 MB/s 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.631 10:55:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.902 10:55:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.161 10:55:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.419 10:55:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.419 10:55:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.419 10:55:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.682 10:55:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.682 10:55:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.942 10:55:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.942 [2024-10-29 10:55:40.334325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.942 [2024-10-29 10:55:40.352748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.942 [2024-10-29 10:55:40.352759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.942 [2024-10-29 10:55:40.381115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.942 [2024-10-29 10:55:40.381202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.942 [2024-10-29 10:55:40.381214] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.237 spdk_app_start Round 1 00:06:38.237 10:55:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.237 10:55:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.237 10:55:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71396 /var/tmp/spdk-nbd.sock 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71396 ']' 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
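Round 0 above exercised both NBD devices the way every round does: write a 1 MiB random pattern file, copy it onto each /dev/nbdX with direct I/O, and read it back with cmp before stopping the disks. A hedged sketch of that verify step with the file and flags from the log:

    testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # 1 MiB of random data (256 x 4 KiB), reused for both devices.
    dd if=/dev/urandom of="$testfile" bs=4096 count=256

    for nbd in /dev/nbd0 /dev/nbd1; do
        # Write the pattern with direct I/O, then compare the first 1 MiB byte by byte.
        dd if="$testfile" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$testfile" "$nbd"
    done

    rm "$testfile"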
00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:38.237 10:55:43 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:38.237 10:55:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.495 Malloc0 00:06:38.495 10:55:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.495 Malloc1 00:06:38.755 10:55:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.755 10:55:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.014 /dev/nbd0 00:06:39.014 10:55:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.014 10:55:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.014 1+0 records in 00:06:39.014 1+0 records out 
00:06:39.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589237 s, 7.0 MB/s 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:39.014 10:55:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:39.014 10:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.014 10:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.014 10:55:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.272 /dev/nbd1 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.272 1+0 records in 00:06:39.272 1+0 records out 00:06:39.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833391 s, 4.9 MB/s 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:39.272 10:55:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.272 10:55:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.530 { 00:06:39.530 "nbd_device": "/dev/nbd0", 00:06:39.530 "bdev_name": "Malloc0" 00:06:39.530 }, 00:06:39.530 { 00:06:39.530 "nbd_device": "/dev/nbd1", 00:06:39.530 "bdev_name": "Malloc1" 00:06:39.530 } 
00:06:39.530 ]' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.530 { 00:06:39.530 "nbd_device": "/dev/nbd0", 00:06:39.530 "bdev_name": "Malloc0" 00:06:39.530 }, 00:06:39.530 { 00:06:39.530 "nbd_device": "/dev/nbd1", 00:06:39.530 "bdev_name": "Malloc1" 00:06:39.530 } 00:06:39.530 ]' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.530 /dev/nbd1' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.530 /dev/nbd1' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.530 256+0 records in 00:06:39.530 256+0 records out 00:06:39.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734636 s, 143 MB/s 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.530 256+0 records in 00:06:39.530 256+0 records out 00:06:39.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239225 s, 43.8 MB/s 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.530 10:55:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.530 256+0 records in 00:06:39.530 256+0 records out 00:06:39.530 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253891 s, 41.3 MB/s 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.787 10:55:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.787 10:55:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.788 10:55:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.046 10:55:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.305 10:55:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.565 10:55:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.565 10:55:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.132 10:55:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.132 [2024-10-29 10:55:46.416789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.132 [2024-10-29 10:55:46.437659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.132 [2024-10-29 10:55:46.437671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.132 [2024-10-29 10:55:46.471026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.132 [2024-10-29 10:55:46.471168] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.132 [2024-10-29 10:55:46.471182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.421 spdk_app_start Round 2 00:06:44.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.421 10:55:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.421 10:55:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:44.421 10:55:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71396 /var/tmp/spdk-nbd.sock 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71396 ']' 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
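Between rounds the harness also confirms the NBD devices are really gone by re-querying nbd_get_disks and counting /dev/nbd entries in the returned JSON, exactly as traced above: the count is 2 while Malloc0 and Malloc1 are attached and must drop to 0 after nbd_stop_disk. A hedged sketch of that check:

    # Count attached NBD devices on the app_repeat socket; 0 is expected after teardown.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    disks_json=$(rpc nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "unexpected NBD devices still attached: $count"
    fi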
00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.421 10:55:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:44.421 10:55:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.421 Malloc0 00:06:44.679 10:55:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.938 Malloc1 00:06:44.938 10:55:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.938 10:55:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.197 /dev/nbd0 00:06:45.197 10:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.197 10:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.197 1+0 records in 00:06:45.197 1+0 records out 
00:06:45.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276811 s, 14.8 MB/s 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:45.197 10:55:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:45.197 10:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.197 10:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.197 10:55:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.456 /dev/nbd1 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.456 1+0 records in 00:06:45.456 1+0 records out 00:06:45.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303332 s, 13.5 MB/s 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:45.456 10:55:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.456 10:55:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.023 { 00:06:46.023 "nbd_device": "/dev/nbd0", 00:06:46.023 "bdev_name": "Malloc0" 00:06:46.023 }, 00:06:46.023 { 00:06:46.023 "nbd_device": "/dev/nbd1", 00:06:46.023 "bdev_name": "Malloc1" 00:06:46.023 } 
00:06:46.023 ]' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.023 { 00:06:46.023 "nbd_device": "/dev/nbd0", 00:06:46.023 "bdev_name": "Malloc0" 00:06:46.023 }, 00:06:46.023 { 00:06:46.023 "nbd_device": "/dev/nbd1", 00:06:46.023 "bdev_name": "Malloc1" 00:06:46.023 } 00:06:46.023 ]' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.023 /dev/nbd1' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.023 /dev/nbd1' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.023 256+0 records in 00:06:46.023 256+0 records out 00:06:46.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010914 s, 96.1 MB/s 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.023 256+0 records in 00:06:46.023 256+0 records out 00:06:46.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250156 s, 41.9 MB/s 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.023 256+0 records in 00:06:46.023 256+0 records out 00:06:46.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256038 s, 41.0 MB/s 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.023 10:55:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.286 10:55:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.544 10:55:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.803 10:55:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.803 10:55:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.061 10:55:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.320 [2024-10-29 10:55:52.633859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.320 [2024-10-29 10:55:52.652184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.320 [2024-10-29 10:55:52.652194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.320 [2024-10-29 10:55:52.680526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.320 [2024-10-29 10:55:52.680635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.320 [2024-10-29 10:55:52.680648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.605 10:55:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71396 /var/tmp/spdk-nbd.sock 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71396 ']' 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:50.605 10:55:55 event.app_repeat -- event/event.sh@39 -- # killprocess 71396 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 71396 ']' 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 71396 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71396 00:06:50.605 killing process with pid 71396 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71396' 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@971 -- # kill 71396 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@976 -- # wait 71396 00:06:50.605 spdk_app_start is called in Round 0. 00:06:50.605 Shutdown signal received, stop current app iteration 00:06:50.605 Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 reinitialization... 00:06:50.605 spdk_app_start is called in Round 1. 00:06:50.605 Shutdown signal received, stop current app iteration 00:06:50.605 Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 reinitialization... 00:06:50.605 spdk_app_start is called in Round 2. 00:06:50.605 Shutdown signal received, stop current app iteration 00:06:50.605 Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 reinitialization... 00:06:50.605 spdk_app_start is called in Round 3. 00:06:50.605 Shutdown signal received, stop current app iteration 00:06:50.605 10:55:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.605 10:55:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:50.605 00:06:50.605 real 0m18.950s 00:06:50.605 user 0m43.806s 00:06:50.605 sys 0m2.641s 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.605 ************************************ 00:06:50.605 END TEST app_repeat 00:06:50.605 ************************************ 00:06:50.605 10:55:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.605 10:55:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.605 10:55:56 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.605 10:55:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.605 10:55:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.606 10:55:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.606 ************************************ 00:06:50.606 START TEST cpu_locks 00:06:50.606 ************************************ 00:06:50.606 10:55:56 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.865 * Looking for test storage... 
00:06:50.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:50.865 10:55:56 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.865 10:55:56 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.865 10:55:56 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.865 10:55:56 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.865 10:55:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:50.865 10:55:56 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.865 10:55:56 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.865 --rc genhtml_branch_coverage=1 00:06:50.865 --rc genhtml_function_coverage=1 00:06:50.865 --rc genhtml_legend=1 00:06:50.866 --rc geninfo_all_blocks=1 00:06:50.866 --rc geninfo_unexecuted_blocks=1 00:06:50.866 00:06:50.866 ' 00:06:50.866 10:55:56 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.866 --rc genhtml_branch_coverage=1 00:06:50.866 --rc genhtml_function_coverage=1 
00:06:50.866 --rc genhtml_legend=1 00:06:50.866 --rc geninfo_all_blocks=1 00:06:50.866 --rc geninfo_unexecuted_blocks=1 00:06:50.866 00:06:50.866 ' 00:06:50.866 10:55:56 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.866 --rc genhtml_branch_coverage=1 00:06:50.866 --rc genhtml_function_coverage=1 00:06:50.866 --rc genhtml_legend=1 00:06:50.866 --rc geninfo_all_blocks=1 00:06:50.866 --rc geninfo_unexecuted_blocks=1 00:06:50.866 00:06:50.866 ' 00:06:50.866 10:55:56 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.866 --rc genhtml_branch_coverage=1 00:06:50.866 --rc genhtml_function_coverage=1 00:06:50.866 --rc genhtml_legend=1 00:06:50.866 --rc geninfo_all_blocks=1 00:06:50.866 --rc geninfo_unexecuted_blocks=1 00:06:50.866 00:06:50.866 ' 00:06:50.866 10:55:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:50.866 10:55:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:50.866 10:55:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:50.866 10:55:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:50.866 10:55:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.866 10:55:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.866 10:55:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.866 ************************************ 00:06:50.866 START TEST default_locks 00:06:50.866 ************************************ 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71842 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71842 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 71842 ']' 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.866 10:55:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.866 [2024-10-29 10:55:56.301113] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:50.866 [2024-10-29 10:55:56.301268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71842 ] 00:06:51.125 [2024-10-29 10:55:56.446820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.125 [2024-10-29 10:55:56.466003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.125 [2024-10-29 10:55:56.501857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.059 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.059 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:52.059 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71842 00:06:52.059 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.059 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71842 00:06:52.317 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71842 00:06:52.317 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 71842 ']' 00:06:52.317 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 71842 00:06:52.317 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71842 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:52.318 killing process with pid 71842 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71842' 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 71842 00:06:52.318 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 71842 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71842 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71842 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71842 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 71842 ']' 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.576 ERROR: process (pid: 71842) is no longer running 00:06:52.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (71842) - No such process 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.576 00:06:52.576 real 0m1.743s 00:06:52.576 user 0m2.015s 00:06:52.576 sys 0m0.458s 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.576 10:55:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.576 ************************************ 00:06:52.576 END TEST default_locks 00:06:52.576 ************************************ 00:06:52.576 10:55:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.576 10:55:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.576 10:55:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.576 10:55:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.576 ************************************ 00:06:52.576 START TEST default_locks_via_rpc 00:06:52.576 ************************************ 00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71888 00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71888 00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 71888 ']' 
00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.576 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.577 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.577 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.577 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.835 [2024-10-29 10:55:58.094678] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:52.835 [2024-10-29 10:55:58.094780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71888 ] 00:06:52.835 [2024-10-29 10:55:58.242245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.835 [2024-10-29 10:55:58.261318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.835 [2024-10-29 10:55:58.296463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71888 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.095 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71888 
00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71888 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 71888 ']' 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 71888 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71888 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.662 killing process with pid 71888 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71888' 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 71888 00:06:53.662 10:55:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 71888 00:06:53.662 00:06:53.662 real 0m1.114s 00:06:53.662 user 0m1.181s 00:06:53.662 sys 0m0.424s 00:06:53.662 10:55:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:53.662 10:55:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.662 ************************************ 00:06:53.662 END TEST default_locks_via_rpc 00:06:53.662 ************************************ 00:06:53.922 10:55:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:53.922 10:55:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:53.922 10:55:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:53.922 10:55:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.922 ************************************ 00:06:53.922 START TEST non_locking_app_on_locked_coremask 00:06:53.922 ************************************ 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71932 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71932 /var/tmp/spdk.sock 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 71932 ']' 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.922 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.922 [2024-10-29 10:55:59.256862] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:53.922 [2024-10-29 10:55:59.256975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71932 ] 00:06:53.922 [2024-10-29 10:55:59.403956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.186 [2024-10-29 10:55:59.426448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.186 [2024-10-29 10:55:59.464513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71940 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71940 /var/tmp/spdk2.sock 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 71940 ']' 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.186 10:55:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.186 [2024-10-29 10:55:59.620853] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:54.186 [2024-10-29 10:55:59.620957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71940 ] 00:06:54.444 [2024-10-29 10:55:59.779048] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.444 [2024-10-29 10:55:59.779108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.444 [2024-10-29 10:55:59.815615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.444 [2024-10-29 10:55:59.890988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.703 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.703 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:54.703 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71932 00:06:54.703 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71932 00:06:54.703 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71932 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 71932 ']' 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 71932 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71932 00:06:55.640 killing process with pid 71932 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71932' 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 71932 00:06:55.640 10:56:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 71932 00:06:55.898 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71940 00:06:55.898 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 71940 ']' 00:06:55.898 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 71940 00:06:55.898 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:55.898 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.898 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71940 00:06:56.157 killing process with pid 71940 00:06:56.157 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:56.157 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:56.157 10:56:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71940' 00:06:56.157 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 71940 00:06:56.157 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 71940 00:06:56.157 00:06:56.157 real 0m2.456s 00:06:56.157 user 0m2.751s 00:06:56.157 sys 0m0.823s 00:06:56.157 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:56.157 10:56:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.157 ************************************ 00:06:56.157 END TEST non_locking_app_on_locked_coremask 00:06:56.157 ************************************ 00:06:56.416 10:56:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.416 10:56:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.416 10:56:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.416 10:56:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.416 ************************************ 00:06:56.416 START TEST locking_app_on_unlocked_coremask 00:06:56.416 ************************************ 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:56.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71994 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71994 /var/tmp/spdk.sock 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 71994 ']' 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.416 10:56:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.416 [2024-10-29 10:56:01.763629] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:06:56.416 [2024-10-29 10:56:01.763926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71994 ] 00:06:56.416 [2024-10-29 10:56:01.912838] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.416 [2024-10-29 10:56:01.913099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.676 [2024-10-29 10:56:01.934132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.676 [2024-10-29 10:56:01.970388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71997 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71997 /var/tmp/spdk2.sock 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 71997 ']' 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.676 10:56:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.676 [2024-10-29 10:56:02.158921] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:56.676 [2024-10-29 10:56:02.159225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71997 ] 00:06:56.934 [2024-10-29 10:56:02.315183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.934 [2024-10-29 10:56:02.355126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.934 [2024-10-29 10:56:02.427420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.871 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.871 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:57.871 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71997 00:06:57.871 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71997 00:06:57.871 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71994 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 71994 ']' 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 71994 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71994 00:06:58.439 killing process with pid 71994 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71994' 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 71994 00:06:58.439 10:56:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 71994 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71997 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 71997 ']' 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 71997 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71997 00:06:59.006 killing process with pid 71997 00:06:59.006 10:56:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71997' 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 71997 00:06:59.006 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 71997 00:06:59.265 ************************************ 00:06:59.265 END TEST locking_app_on_unlocked_coremask 00:06:59.265 ************************************ 00:06:59.265 00:06:59.265 real 0m2.855s 00:06:59.265 user 0m3.359s 00:06:59.265 sys 0m0.839s 00:06:59.265 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.265 10:56:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.265 10:56:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:59.265 10:56:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.265 10:56:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.265 10:56:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.265 ************************************ 00:06:59.265 START TEST locking_app_on_locked_coremask 00:06:59.265 ************************************ 00:06:59.265 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:59.265 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72064 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72064 /var/tmp/spdk.sock 00:06:59.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72064 ']' 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.266 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.266 [2024-10-29 10:56:04.667269] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:06:59.266 [2024-10-29 10:56:04.667843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72064 ] 00:06:59.525 [2024-10-29 10:56:04.818504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.525 [2024-10-29 10:56:04.840339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.525 [2024-10-29 10:56:04.878156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72067 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72067 /var/tmp/spdk2.sock 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72067 /var/tmp/spdk2.sock 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72067 /var/tmp/spdk2.sock 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72067 ']' 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.525 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.526 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.526 10:56:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.785 [2024-10-29 10:56:05.071555] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
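At this point the locking_app_on_locked_coremask test already has one target (pid 72064) holding the core 0 lock and is bringing up a second target on the same mask with its own RPC socket; the claim failure recorded in the next lines is the expected result. A rough sketch of the sequence, with the binary path, mask and socket taken from the log and a foreground exit-status check standing in for the test's NOT waitforlisten pattern:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                     # first target, takes the core 0 lock

    # Second target on the same core mask but a separate RPC socket: it should
    # exit instead of listening, because core 0 is already locked.
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock \
        && echo "unexpected: second target claimed core 0" \
        || echo "expected: unable to acquire lock on assigned core mask"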
00:06:59.785 [2024-10-29 10:56:05.071864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72067 ] 00:06:59.785 [2024-10-29 10:56:05.231090] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72064 has claimed it. 00:06:59.785 [2024-10-29 10:56:05.231156] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.352 ERROR: process (pid: 72067) is no longer running 00:07:00.352 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72067) - No such process 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72064 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72064 00:07:00.353 10:56:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72064 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72064 ']' 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72064 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72064 00:07:00.920 killing process with pid 72064 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72064' 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72064 00:07:00.920 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72064 00:07:01.179 ************************************ 00:07:01.179 END TEST locking_app_on_locked_coremask 00:07:01.179 ************************************ 00:07:01.179 00:07:01.179 real 0m1.889s 00:07:01.179 user 0m2.270s 00:07:01.179 sys 0m0.511s 00:07:01.179 10:56:06 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.179 10:56:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.179 10:56:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.179 10:56:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.179 10:56:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.179 10:56:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.179 ************************************ 00:07:01.179 START TEST locking_overlapped_coremask 00:07:01.179 ************************************ 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:01.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72113 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72113 /var/tmp/spdk.sock 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 72113 ']' 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.179 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.180 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.180 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.180 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.180 [2024-10-29 10:56:06.644062] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:01.180 [2024-10-29 10:56:06.644482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72113 ] 00:07:01.439 [2024-10-29 10:56:06.797551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.439 [2024-10-29 10:56:06.822212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.439 [2024-10-29 10:56:06.822056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.439 [2024-10-29 10:56:06.822204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.439 [2024-10-29 10:56:06.861396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72123 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72123 /var/tmp/spdk2.sock 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72123 /var/tmp/spdk2.sock 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72123 /var/tmp/spdk2.sock 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 72123 ']' 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.699 10:56:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.699 [2024-10-29 10:56:07.052843] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
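The overlapped-coremask case pairs a primary target on mask 0x7 (cores 0 to 2) with a second target on mask 0x1c (cores 2 to 4), as the two launch lines above show; the 'Cannot create lock on core 2' error that follows is expected because the masks share core 2. A quick check of that overlap using only the masks from the log:

    # 0x7 covers cores 0,1,2 and 0x1c covers cores 2,3,4; a non-zero AND of the
    # masks means the second target must take a core lock that is already held.
    mask_a=0x7
    mask_b=0x1c
    printf 'shared cores mask: 0x%x\n' $(( mask_a & mask_b ))   # prints 0x4, i.e. core 2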
00:07:01.699 [2024-10-29 10:56:07.052931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72123 ] 00:07:01.958 [2024-10-29 10:56:07.216492] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72113 has claimed it. 00:07:01.958 [2024-10-29 10:56:07.216559] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.525 ERROR: process (pid: 72123) is no longer running 00:07:02.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72123) - No such process 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72113 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 72113 ']' 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 72113 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72113 00:07:02.525 killing process with pid 72113 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72113' 00:07:02.525 10:56:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 72113 00:07:02.526 10:56:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 72113 00:07:02.784 ************************************ 00:07:02.784 END TEST locking_overlapped_coremask 00:07:02.784 ************************************ 00:07:02.784 00:07:02.784 real 0m1.498s 00:07:02.784 user 0m4.142s 00:07:02.784 sys 0m0.325s 00:07:02.784 10:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.784 10:56:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 10:56:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.785 10:56:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.785 10:56:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.785 10:56:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 ************************************ 00:07:02.785 START TEST locking_overlapped_coremask_via_rpc 00:07:02.785 ************************************ 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72163 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72163 /var/tmp/spdk.sock 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72163 ']' 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.785 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.785 [2024-10-29 10:56:08.146515] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:02.785 [2024-10-29 10:56:08.146607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:07:03.044 [2024-10-29 10:56:08.283682] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.044 [2024-10-29 10:56:08.283732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.044 [2024-10-29 10:56:08.306713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.044 [2024-10-29 10:56:08.306846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.044 [2024-10-29 10:56:08.306851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.044 [2024-10-29 10:56:08.347384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72174 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72174 /var/tmp/spdk2.sock 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72174 ']' 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.044 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.044 [2024-10-29 10:56:08.513308] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:03.044 [2024-10-29 10:56:08.513398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72174 ] 00:07:03.303 [2024-10-29 10:56:08.667382] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
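Unlike the previous case, both targets in the via_rpc variant are launched with --disable-cpumask-locks, which is why the overlapping masks coexist at startup and both instances print the 'CPU core locks deactivated' notice above; the conflict is only provoked later over RPC. The two launches, reduced to their essentials from the invocations in the log (backgrounding and the omitted wait-for-socket step are simplifications):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # Both instances come up despite sharing core 2, because neither takes its
    # per-core lock files while this flag is set.
    "$spdk_tgt" -m 0x7  --disable-cpumask-locks &
    "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &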
00:07:03.303 [2024-10-29 10:56:08.671409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.303 [2024-10-29 10:56:08.712072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.303 [2024-10-29 10:56:08.715561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.303 [2024-10-29 10:56:08.715561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:03.303 [2024-10-29 10:56:08.790582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.561 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.562 10:56:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.562 [2024-10-29 10:56:09.008570] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72163 has claimed it. 00:07:03.562 request: 00:07:03.562 { 00:07:03.562 "method": "framework_enable_cpumask_locks", 00:07:03.562 "req_id": 1 00:07:03.562 } 00:07:03.562 Got JSON-RPC error response 00:07:03.562 response: 00:07:03.562 { 00:07:03.562 "code": -32603, 00:07:03.562 "message": "Failed to claim CPU core: 2" 00:07:03.562 } 00:07:03.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
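The framework_enable_cpumask_locks RPC is what finally forces the collision: it succeeds against the first target and, as the JSON-RPC exchange above shows, fails against the second with code -32603 and the message 'Failed to claim CPU core: 2'. Stripped of the rpc_cmd test wrapper, the two calls amount to the following (socket paths as in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First target (default /var/tmp/spdk.sock): takes its core locks now.
    "$rpc" framework_enable_cpumask_locks

    # Second target (overlapping mask, own socket): this is the call that came
    # back with the -32603 "Failed to claim CPU core: 2" response above.
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks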
00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72163 /var/tmp/spdk.sock 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72163 ']' 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.562 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72174 /var/tmp/spdk2.sock 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72174 ']' 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.821 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.428 ************************************ 00:07:04.428 END TEST locking_overlapped_coremask_via_rpc 00:07:04.428 ************************************ 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:04.428 00:07:04.428 real 0m1.517s 00:07:04.428 user 0m1.024s 00:07:04.428 sys 0m0.133s 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.428 10:56:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.428 10:56:09 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:04.428 10:56:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72163 ]] 00:07:04.428 10:56:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72163 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72163 ']' 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72163 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72163 00:07:04.428 killing process with pid 72163 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72163' 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 72163 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 72163 00:07:04.428 10:56:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72174 ]] 00:07:04.428 10:56:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72174 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72174 ']' 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72174 00:07:04.428 10:56:09 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:04.429 10:56:09 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.429 
10:56:09 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72174 00:07:04.686 killing process with pid 72174 00:07:04.686 10:56:09 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:04.686 10:56:09 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:04.686 10:56:09 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72174' 00:07:04.686 10:56:09 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 72174 00:07:04.686 10:56:09 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 72174 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72163 ]] 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72163 00:07:04.686 10:56:10 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72163 ']' 00:07:04.686 10:56:10 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72163 00:07:04.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72163) - No such process 00:07:04.686 Process with pid 72163 is not found 00:07:04.686 10:56:10 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 72163 is not found' 00:07:04.686 Process with pid 72174 is not found 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72174 ]] 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72174 00:07:04.686 10:56:10 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72174 ']' 00:07:04.686 10:56:10 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72174 00:07:04.686 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72174) - No such process 00:07:04.686 10:56:10 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 72174 is not found' 00:07:04.686 10:56:10 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.686 ************************************ 00:07:04.686 END TEST cpu_locks 00:07:04.686 ************************************ 00:07:04.686 00:07:04.686 real 0m14.110s 00:07:04.686 user 0m24.555s 00:07:04.686 sys 0m4.133s 00:07:04.687 10:56:10 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.687 10:56:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.945 ************************************ 00:07:04.945 END TEST event 00:07:04.945 ************************************ 00:07:04.945 00:07:04.945 real 0m41.417s 00:07:04.945 user 1m21.560s 00:07:04.945 sys 0m7.482s 00:07:04.945 10:56:10 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.945 10:56:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.945 10:56:10 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.945 10:56:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.945 10:56:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.945 10:56:10 -- common/autotest_common.sh@10 -- # set +x 00:07:04.945 ************************************ 00:07:04.945 START TEST thread 00:07:04.945 ************************************ 00:07:04.945 10:56:10 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.945 * Looking for test storage... 
00:07:04.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:04.945 10:56:10 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:04.945 10:56:10 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:04.945 10:56:10 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:04.945 10:56:10 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:04.945 10:56:10 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.945 10:56:10 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.945 10:56:10 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.945 10:56:10 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.945 10:56:10 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.945 10:56:10 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.946 10:56:10 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.946 10:56:10 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.946 10:56:10 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.946 10:56:10 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.946 10:56:10 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.946 10:56:10 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:04.946 10:56:10 thread -- scripts/common.sh@345 -- # : 1 00:07:04.946 10:56:10 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.946 10:56:10 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.946 10:56:10 thread -- scripts/common.sh@365 -- # decimal 1 00:07:04.946 10:56:10 thread -- scripts/common.sh@353 -- # local d=1 00:07:04.946 10:56:10 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.946 10:56:10 thread -- scripts/common.sh@355 -- # echo 1 00:07:04.946 10:56:10 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.946 10:56:10 thread -- scripts/common.sh@366 -- # decimal 2 00:07:04.946 10:56:10 thread -- scripts/common.sh@353 -- # local d=2 00:07:04.946 10:56:10 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.946 10:56:10 thread -- scripts/common.sh@355 -- # echo 2 00:07:04.946 10:56:10 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.946 10:56:10 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.946 10:56:10 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.946 10:56:10 thread -- scripts/common.sh@368 -- # return 0 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:04.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.946 --rc genhtml_branch_coverage=1 00:07:04.946 --rc genhtml_function_coverage=1 00:07:04.946 --rc genhtml_legend=1 00:07:04.946 --rc geninfo_all_blocks=1 00:07:04.946 --rc geninfo_unexecuted_blocks=1 00:07:04.946 00:07:04.946 ' 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:04.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.946 --rc genhtml_branch_coverage=1 00:07:04.946 --rc genhtml_function_coverage=1 00:07:04.946 --rc genhtml_legend=1 00:07:04.946 --rc geninfo_all_blocks=1 00:07:04.946 --rc geninfo_unexecuted_blocks=1 00:07:04.946 00:07:04.946 ' 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:04.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:04.946 --rc genhtml_branch_coverage=1 00:07:04.946 --rc genhtml_function_coverage=1 00:07:04.946 --rc genhtml_legend=1 00:07:04.946 --rc geninfo_all_blocks=1 00:07:04.946 --rc geninfo_unexecuted_blocks=1 00:07:04.946 00:07:04.946 ' 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:04.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.946 --rc genhtml_branch_coverage=1 00:07:04.946 --rc genhtml_function_coverage=1 00:07:04.946 --rc genhtml_legend=1 00:07:04.946 --rc geninfo_all_blocks=1 00:07:04.946 --rc geninfo_unexecuted_blocks=1 00:07:04.946 00:07:04.946 ' 00:07:04.946 10:56:10 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.946 10:56:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.946 ************************************ 00:07:04.946 START TEST thread_poller_perf 00:07:04.946 ************************************ 00:07:04.946 10:56:10 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:05.205 [2024-10-29 10:56:10.460824] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:05.205 [2024-10-29 10:56:10.461085] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72297 ] 00:07:05.205 [2024-10-29 10:56:10.607598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.205 [2024-10-29 10:56:10.625922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.205 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:06.582 [2024-10-29T10:56:12.079Z] ====================================== 00:07:06.582 [2024-10-29T10:56:12.079Z] busy:2209461910 (cyc) 00:07:06.582 [2024-10-29T10:56:12.079Z] total_run_count: 382000 00:07:06.582 [2024-10-29T10:56:12.079Z] tsc_hz: 2200000000 (cyc) 00:07:06.582 [2024-10-29T10:56:12.079Z] ====================================== 00:07:06.582 [2024-10-29T10:56:12.079Z] poller_cost: 5783 (cyc), 2628 (nsec) 00:07:06.582 00:07:06.582 ************************************ 00:07:06.582 END TEST thread_poller_perf 00:07:06.582 ************************************ 00:07:06.582 real 0m1.223s 00:07:06.582 user 0m1.081s 00:07:06.582 sys 0m0.036s 00:07:06.582 10:56:11 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.582 10:56:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.582 10:56:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.582 10:56:11 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:06.582 10:56:11 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.582 10:56:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.582 ************************************ 00:07:06.582 START TEST thread_poller_perf 00:07:06.582 ************************************ 00:07:06.582 10:56:11 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:06.582 [2024-10-29 10:56:11.733785] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:06.582 [2024-10-29 10:56:11.733890] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72327 ] 00:07:06.582 [2024-10-29 10:56:11.882932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.582 Running 1000 pollers for 1 seconds with 0 microseconds period. 
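The two poller_perf runs differ only in the -l argument, a 1 microsecond poller period in the first run against 0 (busy pollers) in the run that has just started, as the 'Running 1000 pollers...' banners indicate; the per-poll cost in nanoseconds reported in the summary is the cycle count converted through tsc_hz. A small sanity check of that conversion for the first run, with the values copied from the summary above (the flag meanings are inferred from the banners rather than from the tool's help output):

    # 5783 cycles at a 2.2 GHz TSC is about 2628 ns, matching the summary above.
    cyc=5783
    tsc_hz=2200000000
    echo $(( cyc * 1000000000 / tsc_hz ))   # prints 2628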
00:07:06.582 [2024-10-29 10:56:11.907055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.518 [2024-10-29T10:56:13.015Z] ====================================== 00:07:07.518 [2024-10-29T10:56:13.015Z] busy:2202088654 (cyc) 00:07:07.518 [2024-10-29T10:56:13.015Z] total_run_count: 5076000 00:07:07.518 [2024-10-29T10:56:13.015Z] tsc_hz: 2200000000 (cyc) 00:07:07.518 [2024-10-29T10:56:13.015Z] ====================================== 00:07:07.518 [2024-10-29T10:56:13.015Z] poller_cost: 433 (cyc), 196 (nsec) 00:07:07.518 00:07:07.518 real 0m1.230s 00:07:07.518 user 0m1.083s 00:07:07.518 sys 0m0.042s 00:07:07.518 10:56:12 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.518 ************************************ 00:07:07.518 END TEST thread_poller_perf 00:07:07.518 ************************************ 00:07:07.518 10:56:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.518 10:56:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:07.518 00:07:07.518 real 0m2.745s 00:07:07.518 user 0m2.306s 00:07:07.518 sys 0m0.225s 00:07:07.518 10:56:12 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.518 10:56:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.518 ************************************ 00:07:07.518 END TEST thread 00:07:07.518 ************************************ 00:07:07.777 10:56:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:07.777 10:56:13 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:07.777 10:56:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:07.777 10:56:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.777 10:56:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.777 ************************************ 00:07:07.777 START TEST app_cmdline 00:07:07.777 ************************************ 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:07.777 * Looking for test storage... 
00:07:07.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.777 10:56:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.777 --rc genhtml_branch_coverage=1 00:07:07.777 --rc genhtml_function_coverage=1 00:07:07.777 --rc genhtml_legend=1 00:07:07.777 --rc geninfo_all_blocks=1 00:07:07.777 --rc geninfo_unexecuted_blocks=1 00:07:07.777 00:07:07.777 ' 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.777 --rc genhtml_branch_coverage=1 00:07:07.777 --rc genhtml_function_coverage=1 00:07:07.777 --rc genhtml_legend=1 00:07:07.777 --rc geninfo_all_blocks=1 00:07:07.777 --rc geninfo_unexecuted_blocks=1 00:07:07.777 
00:07:07.777 ' 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.777 --rc genhtml_branch_coverage=1 00:07:07.777 --rc genhtml_function_coverage=1 00:07:07.777 --rc genhtml_legend=1 00:07:07.777 --rc geninfo_all_blocks=1 00:07:07.777 --rc geninfo_unexecuted_blocks=1 00:07:07.777 00:07:07.777 ' 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.777 --rc genhtml_branch_coverage=1 00:07:07.777 --rc genhtml_function_coverage=1 00:07:07.777 --rc genhtml_legend=1 00:07:07.777 --rc geninfo_all_blocks=1 00:07:07.777 --rc geninfo_unexecuted_blocks=1 00:07:07.777 00:07:07.777 ' 00:07:07.777 10:56:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:07.777 10:56:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72409 00:07:07.777 10:56:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:07.777 10:56:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72409 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 72409 ']' 00:07:07.777 10:56:13 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.778 10:56:13 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.778 10:56:13 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.778 10:56:13 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.778 10:56:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.036 [2024-10-29 10:56:13.288419] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
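The app_cmdline target being started above is deliberately restricted with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should answer and anything else should be rejected. The checks the test performs next, reduced to plain rpc.py calls (the env_dpdk_get_mem_stats rejection is the negative case shown further below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" spdk_get_version                       # allowed: returns the version object
    "$rpc" rpc_get_methods | jq -r '.[]' | sort   # allowed: should list exactly the two methods
    "$rpc" env_dpdk_get_mem_stats \
        || echo "rejected as expected: method not on the allow-list"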
00:07:08.036 [2024-10-29 10:56:13.288529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72409 ] 00:07:08.036 [2024-10-29 10:56:13.438629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.036 [2024-10-29 10:56:13.459649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.036 [2024-10-29 10:56:13.494609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.295 10:56:13 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:08.295 10:56:13 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:08.295 10:56:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:08.553 { 00:07:08.553 "version": "SPDK v25.01-pre git sha1 12fc2abf1", 00:07:08.553 "fields": { 00:07:08.553 "major": 25, 00:07:08.553 "minor": 1, 00:07:08.553 "patch": 0, 00:07:08.553 "suffix": "-pre", 00:07:08.553 "commit": "12fc2abf1" 00:07:08.554 } 00:07:08.554 } 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:08.554 10:56:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:08.554 10:56:13 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.813 request: 00:07:08.813 { 00:07:08.813 "method": "env_dpdk_get_mem_stats", 00:07:08.813 "req_id": 1 00:07:08.813 } 00:07:08.813 Got JSON-RPC error response 00:07:08.813 response: 00:07:08.813 { 00:07:08.813 "code": -32601, 00:07:08.813 "message": "Method not found" 00:07:08.813 } 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.813 10:56:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72409 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 72409 ']' 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 72409 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72409 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:08.813 killing process with pid 72409 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72409' 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@971 -- # kill 72409 00:07:08.813 10:56:14 app_cmdline -- common/autotest_common.sh@976 -- # wait 72409 00:07:09.073 00:07:09.073 real 0m1.478s 00:07:09.073 user 0m2.006s 00:07:09.073 sys 0m0.349s 00:07:09.073 10:56:14 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.073 10:56:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 ************************************ 00:07:09.073 END TEST app_cmdline 00:07:09.073 ************************************ 00:07:09.073 10:56:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:09.073 10:56:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.073 10:56:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.073 10:56:14 -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 ************************************ 00:07:09.073 START TEST version 00:07:09.073 ************************************ 00:07:09.073 10:56:14 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:09.333 * Looking for test storage... 
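The version test that starts here reads the version fields straight out of include/spdk/version.h and compares the result with what the installed Python module reports. The extraction that appears a little further below is a grep, cut and tr pipeline per field; a condensed sketch of it (the parameterized helper is illustrative, not the literal version.sh function, and it assumes the header separates the macro name from its value with a tab, which is what makes cut -f2 work in the trace):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h

    # Pull one #define value out of version.h: grep the line, take the second
    # tab-separated field, strip the surrounding quotes.
    header_field() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }

    major=$(header_field MAJOR)     # 25
    minor=$(header_field MINOR)     # 1
    suffix=$(header_field SUFFIX)   # -pre
    echo "${major}.${minor}"        # 25.1; a -pre suffix corresponds to the
                                    # 25.1rc0 the Python module reports below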
00:07:09.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.333 10:56:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.333 10:56:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.333 10:56:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.333 10:56:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.333 10:56:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.333 10:56:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.333 10:56:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.333 10:56:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.333 10:56:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.333 10:56:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.333 10:56:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.333 10:56:14 version -- scripts/common.sh@344 -- # case "$op" in 00:07:09.333 10:56:14 version -- scripts/common.sh@345 -- # : 1 00:07:09.333 10:56:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.333 10:56:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.333 10:56:14 version -- scripts/common.sh@365 -- # decimal 1 00:07:09.333 10:56:14 version -- scripts/common.sh@353 -- # local d=1 00:07:09.333 10:56:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.333 10:56:14 version -- scripts/common.sh@355 -- # echo 1 00:07:09.333 10:56:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.333 10:56:14 version -- scripts/common.sh@366 -- # decimal 2 00:07:09.333 10:56:14 version -- scripts/common.sh@353 -- # local d=2 00:07:09.333 10:56:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.333 10:56:14 version -- scripts/common.sh@355 -- # echo 2 00:07:09.333 10:56:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.333 10:56:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.333 10:56:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.333 10:56:14 version -- scripts/common.sh@368 -- # return 0 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.333 --rc genhtml_branch_coverage=1 00:07:09.333 --rc genhtml_function_coverage=1 00:07:09.333 --rc genhtml_legend=1 00:07:09.333 --rc geninfo_all_blocks=1 00:07:09.333 --rc geninfo_unexecuted_blocks=1 00:07:09.333 00:07:09.333 ' 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.333 --rc genhtml_branch_coverage=1 00:07:09.333 --rc genhtml_function_coverage=1 00:07:09.333 --rc genhtml_legend=1 00:07:09.333 --rc geninfo_all_blocks=1 00:07:09.333 --rc geninfo_unexecuted_blocks=1 00:07:09.333 00:07:09.333 ' 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.333 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:09.333 --rc genhtml_branch_coverage=1 00:07:09.333 --rc genhtml_function_coverage=1 00:07:09.333 --rc genhtml_legend=1 00:07:09.333 --rc geninfo_all_blocks=1 00:07:09.333 --rc geninfo_unexecuted_blocks=1 00:07:09.333 00:07:09.333 ' 00:07:09.333 10:56:14 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.333 --rc genhtml_branch_coverage=1 00:07:09.333 --rc genhtml_function_coverage=1 00:07:09.333 --rc genhtml_legend=1 00:07:09.333 --rc geninfo_all_blocks=1 00:07:09.333 --rc geninfo_unexecuted_blocks=1 00:07:09.333 00:07:09.333 ' 00:07:09.333 10:56:14 version -- app/version.sh@17 -- # get_header_version major 00:07:09.333 10:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # cut -f2 00:07:09.333 10:56:14 version -- app/version.sh@17 -- # major=25 00:07:09.333 10:56:14 version -- app/version.sh@18 -- # get_header_version minor 00:07:09.333 10:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # cut -f2 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.333 10:56:14 version -- app/version.sh@18 -- # minor=1 00:07:09.333 10:56:14 version -- app/version.sh@19 -- # get_header_version patch 00:07:09.333 10:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # cut -f2 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.333 10:56:14 version -- app/version.sh@19 -- # patch=0 00:07:09.333 10:56:14 version -- app/version.sh@20 -- # get_header_version suffix 00:07:09.333 10:56:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # cut -f2 00:07:09.333 10:56:14 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.333 10:56:14 version -- app/version.sh@20 -- # suffix=-pre 00:07:09.333 10:56:14 version -- app/version.sh@22 -- # version=25.1 00:07:09.333 10:56:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:09.333 10:56:14 version -- app/version.sh@28 -- # version=25.1rc0 00:07:09.333 10:56:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:09.333 10:56:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:09.333 10:56:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:09.334 10:56:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:09.334 00:07:09.334 real 0m0.262s 00:07:09.334 user 0m0.172s 00:07:09.334 sys 0m0.129s 00:07:09.334 10:56:14 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.334 10:56:14 version -- common/autotest_common.sh@10 -- # set +x 00:07:09.334 ************************************ 00:07:09.334 END TEST version 00:07:09.334 ************************************ 00:07:09.593 10:56:14 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:09.593 10:56:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:09.593 10:56:14 -- spdk/autotest.sh@194 -- # uname -s 00:07:09.593 10:56:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:09.593 10:56:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:09.593 10:56:14 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:09.593 10:56:14 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:09.593 10:56:14 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:09.593 10:56:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:09.593 10:56:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.593 10:56:14 -- common/autotest_common.sh@10 -- # set +x 00:07:09.593 ************************************ 00:07:09.593 START TEST spdk_dd 00:07:09.593 ************************************ 00:07:09.593 10:56:14 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:09.593 * Looking for test storage... 00:07:09.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:09.593 10:56:14 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.593 10:56:14 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.593 10:56:14 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.593 10:56:15 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:09.593 10:56:15 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.593 10:56:15 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.593 --rc genhtml_branch_coverage=1 00:07:09.593 --rc genhtml_function_coverage=1 00:07:09.593 --rc genhtml_legend=1 00:07:09.593 --rc geninfo_all_blocks=1 00:07:09.593 --rc geninfo_unexecuted_blocks=1 00:07:09.593 00:07:09.593 ' 00:07:09.593 10:56:15 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.593 --rc genhtml_branch_coverage=1 00:07:09.593 --rc genhtml_function_coverage=1 00:07:09.593 --rc genhtml_legend=1 00:07:09.593 --rc geninfo_all_blocks=1 00:07:09.593 --rc geninfo_unexecuted_blocks=1 00:07:09.593 00:07:09.593 ' 00:07:09.593 10:56:15 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.593 --rc genhtml_branch_coverage=1 00:07:09.593 --rc genhtml_function_coverage=1 00:07:09.593 --rc genhtml_legend=1 00:07:09.593 --rc geninfo_all_blocks=1 00:07:09.593 --rc geninfo_unexecuted_blocks=1 00:07:09.593 00:07:09.593 ' 00:07:09.593 10:56:15 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.593 --rc genhtml_branch_coverage=1 00:07:09.593 --rc genhtml_function_coverage=1 00:07:09.593 --rc genhtml_legend=1 00:07:09.593 --rc geninfo_all_blocks=1 00:07:09.593 --rc geninfo_unexecuted_blocks=1 00:07:09.593 00:07:09.593 ' 00:07:09.593 10:56:15 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.593 10:56:15 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.593 10:56:15 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.593 10:56:15 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.594 10:56:15 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.594 10:56:15 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:09.594 10:56:15 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.594 10:56:15 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:10.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:10.163 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:10.163 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:10.163 10:56:15 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:10.163 10:56:15 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:10.163 10:56:15 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:10.163 10:56:15 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:10.163 10:56:15 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.1 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.163 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:10.164 * spdk_dd linked to liburing 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:10.164 10:56:15 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:10.164 10:56:15 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 
00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:10.165 10:56:15 
spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:10.165 10:56:15 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:10.165 10:56:15 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:10.165 10:56:15 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:10.165 10:56:15 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:10.165 10:56:15 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:10.165 10:56:15 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:10.165 10:56:15 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:10.165 10:56:15 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:10.165 10:56:15 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.165 10:56:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:10.165 ************************************ 00:07:10.165 START TEST spdk_dd_basic_rw 00:07:10.165 ************************************ 00:07:10.165 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:10.165 * Looking for test storage... 
00:07:10.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:10.165 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.165 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.165 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.424 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.424 --rc genhtml_branch_coverage=1 00:07:10.424 --rc genhtml_function_coverage=1 00:07:10.424 --rc genhtml_legend=1 00:07:10.424 --rc geninfo_all_blocks=1 00:07:10.424 --rc geninfo_unexecuted_blocks=1 00:07:10.424 00:07:10.424 ' 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.425 --rc genhtml_branch_coverage=1 00:07:10.425 --rc genhtml_function_coverage=1 00:07:10.425 --rc genhtml_legend=1 00:07:10.425 --rc geninfo_all_blocks=1 00:07:10.425 --rc geninfo_unexecuted_blocks=1 00:07:10.425 00:07:10.425 ' 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.425 --rc genhtml_branch_coverage=1 00:07:10.425 --rc genhtml_function_coverage=1 00:07:10.425 --rc genhtml_legend=1 00:07:10.425 --rc geninfo_all_blocks=1 00:07:10.425 --rc geninfo_unexecuted_blocks=1 00:07:10.425 00:07:10.425 ' 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.425 --rc genhtml_branch_coverage=1 00:07:10.425 --rc genhtml_function_coverage=1 00:07:10.425 --rc genhtml_legend=1 00:07:10.425 --rc geninfo_all_blocks=1 00:07:10.425 --rc geninfo_unexecuted_blocks=1 00:07:10.425 00:07:10.425 ' 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:10.425 10:56:15 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:10.425 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:10.686 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:10.686 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 ************************************ 00:07:10.687 START TEST dd_bs_lt_native_bs 00:07:10.687 ************************************ 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:10.687 10:56:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:10.687 { 00:07:10.687 "subsystems": [ 00:07:10.687 { 00:07:10.687 "subsystem": "bdev", 00:07:10.687 "config": [ 00:07:10.687 { 00:07:10.687 "params": { 00:07:10.687 "trtype": "pcie", 00:07:10.687 "traddr": "0000:00:10.0", 00:07:10.687 "name": "Nvme0" 00:07:10.687 }, 00:07:10.687 "method": "bdev_nvme_attach_controller" 00:07:10.687 }, 00:07:10.687 { 00:07:10.687 "method": "bdev_wait_for_examine" 00:07:10.687 } 00:07:10.687 ] 00:07:10.687 } 00:07:10.687 ] 00:07:10.687 } 00:07:10.687 [2024-10-29 10:56:16.018787] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:10.687 [2024-10-29 10:56:16.018917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:07:10.687 [2024-10-29 10:56:16.172362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.949 [2024-10-29 10:56:16.197190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.949 [2024-10-29 10:56:16.232866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.949 [2024-10-29 10:56:16.325068] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:10.949 [2024-10-29 10:56:16.325149] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.949 [2024-10-29 10:56:16.403627] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.208 00:07:11.208 real 0m0.499s 00:07:11.208 user 0m0.332s 00:07:11.208 sys 0m0.124s 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.208 10:56:16 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:11.208 ************************************ 00:07:11.208 END TEST dd_bs_lt_native_bs 00:07:11.208 ************************************ 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.208 ************************************ 00:07:11.208 START TEST dd_rw 00:07:11.208 ************************************ 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:11.208 10:56:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:11.776 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:11.776 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:11.776 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:11.776 { 00:07:11.776 "subsystems": [ 00:07:11.776 { 00:07:11.776 "subsystem": "bdev", 00:07:11.776 "config": [ 00:07:11.776 { 00:07:11.776 "params": { 00:07:11.776 "trtype": "pcie", 00:07:11.776 "traddr": "0000:00:10.0", 00:07:11.776 "name": "Nvme0" 00:07:11.776 }, 00:07:11.776 "method": "bdev_nvme_attach_controller" 00:07:11.776 }, 00:07:11.776 { 00:07:11.776 "method": "bdev_wait_for_examine" 00:07:11.776 } 00:07:11.776 ] 00:07:11.776 } 
00:07:11.776 ] 00:07:11.776 } 00:07:11.776 [2024-10-29 10:56:17.164157] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:11.776 [2024-10-29 10:56:17.164247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72783 ] 00:07:12.035 [2024-10-29 10:56:17.318613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.035 [2024-10-29 10:56:17.342708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.035 [2024-10-29 10:56:17.376874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.035  [2024-10-29T10:56:17.791Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:12.294 00:07:12.294 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:12.294 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:12.294 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.294 10:56:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.294 { 00:07:12.294 "subsystems": [ 00:07:12.294 { 00:07:12.294 "subsystem": "bdev", 00:07:12.294 "config": [ 00:07:12.294 { 00:07:12.294 "params": { 00:07:12.294 "trtype": "pcie", 00:07:12.294 "traddr": "0000:00:10.0", 00:07:12.294 "name": "Nvme0" 00:07:12.294 }, 00:07:12.294 "method": "bdev_nvme_attach_controller" 00:07:12.294 }, 00:07:12.294 { 00:07:12.294 "method": "bdev_wait_for_examine" 00:07:12.294 } 00:07:12.294 ] 00:07:12.294 } 00:07:12.294 ] 00:07:12.294 } 00:07:12.294 [2024-10-29 10:56:17.650156] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
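For readers following the trace, the get_native_nvme_bs step earlier in this run works by dumping the controller report with spdk_nvme_identify and scraping the in-use LBA format out of the text, which is why the full Identify output appears twice above (once per regex match at dd/common.sh@129 and @131). A minimal stand-alone sketch of the same extraction, using a repo-relative path and scalar variables instead of the harness's mapfile array, could look like this; it is not the dd/common.sh code itself:

# Sketch only; mirrors the two regexes visible in the trace above.
pci=0000:00:10.0
id=$(build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}
echo "$native_bs"   # 4096 for this QEMU namespace (in-use LBA Format #04)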
00:07:12.294 [2024-10-29 10:56:17.650267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72798 ] 00:07:12.294 [2024-10-29 10:56:17.793092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.553 [2024-10-29 10:56:17.812287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.553 [2024-10-29 10:56:17.840463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.553  [2024-10-29T10:56:18.050Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:12.553 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:12.553 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:12.812 { 00:07:12.813 "subsystems": [ 00:07:12.813 { 00:07:12.813 "subsystem": "bdev", 00:07:12.813 "config": [ 00:07:12.813 { 00:07:12.813 "params": { 00:07:12.813 "trtype": "pcie", 00:07:12.813 "traddr": "0000:00:10.0", 00:07:12.813 "name": "Nvme0" 00:07:12.813 }, 00:07:12.813 "method": "bdev_nvme_attach_controller" 00:07:12.813 }, 00:07:12.813 { 00:07:12.813 "method": "bdev_wait_for_examine" 00:07:12.813 } 00:07:12.813 ] 00:07:12.813 } 00:07:12.813 ] 00:07:12.813 } 00:07:12.813 [2024-10-29 10:56:18.112149] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
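The dd_bs_lt_native_bs case traced above is a negative test: with the native block size detected as 4096, spdk_dd is invoked with --bs=2048 and must refuse, which it does with the "--bs value cannot be less than input (1) neither output (4096) native block size" error shown. The NOT wrapper then inverts that outcome; the es=234 / es=106 / es=1 sequence in the trace is the wrapper treating a status above 128 as a signal-style exit (234 - 128 = 106) and folding it down to a plain failure of 1. Outside the harness the same assertion could be sketched roughly as follows, where conf.json is a hypothetical file holding the same bdev_nvme_attach_controller config shown in the JSON blocks of this trace:

# Sketch: expect spdk_dd to reject a --bs below the native block size (4096 here).
if build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=2048 --count=1 --json conf.json; then
    echo "FAIL: undersized --bs was accepted" >&2
    exit 1
fi
echo "rejected as expected"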
00:07:12.813 [2024-10-29 10:56:18.112254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72808 ] 00:07:12.813 [2024-10-29 10:56:18.260105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.813 [2024-10-29 10:56:18.278548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.813 [2024-10-29 10:56:18.306847] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.071  [2024-10-29T10:56:18.568Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:13.071 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:13.072 10:56:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.639 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:13.639 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:13.639 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:13.639 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:13.639 [2024-10-29 10:56:19.117320] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
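Every spdk_dd call in this trace takes its bdev configuration over a file descriptor (--json /dev/fd/62 or /dev/fd/61) rather than from a file on disk; the JSON fragments interleaved with the log are that configuration being streamed in by the harness's gen_conf helper. A minimal reproduction of the pattern, with a locally defined helper standing in for gen_conf and repo-relative paths, might look like:

# Sketch: pass the bdev config to spdk_dd via process substitution, as the harness does.
my_conf() {
    printf '%s\n' \
        '{ "subsystems": [ { "subsystem": "bdev", "config": [' \
        '  { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },' \
        '    "method": "bdev_nvme_attach_controller" },' \
        '  { "method": "bdev_wait_for_examine" } ] } ] }'
}
build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(my_conf)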
00:07:13.639 [2024-10-29 10:56:19.117438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:07:13.639 { 00:07:13.639 "subsystems": [ 00:07:13.639 { 00:07:13.639 "subsystem": "bdev", 00:07:13.639 "config": [ 00:07:13.639 { 00:07:13.639 "params": { 00:07:13.639 "trtype": "pcie", 00:07:13.639 "traddr": "0000:00:10.0", 00:07:13.639 "name": "Nvme0" 00:07:13.639 }, 00:07:13.639 "method": "bdev_nvme_attach_controller" 00:07:13.639 }, 00:07:13.639 { 00:07:13.639 "method": "bdev_wait_for_examine" 00:07:13.639 } 00:07:13.639 ] 00:07:13.639 } 00:07:13.639 ] 00:07:13.639 } 00:07:13.897 [2024-10-29 10:56:19.271504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.897 [2024-10-29 10:56:19.295149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.897 [2024-10-29 10:56:19.330118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.156  [2024-10-29T10:56:19.653Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:14.156 00:07:14.156 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:14.156 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:14.156 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.156 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.156 [2024-10-29 10:56:19.588338] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
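Between passes the harness calls clear_nvme (dd/common.sh@10 above), which overwrites the region that was just written with zeros so a later read-back cannot pass on stale data; in this run that is a single 1 MiB block covering the 61440 bytes of the pass. Assuming the conf.json placeholder from the earlier sketch, the equivalent one-liner is roughly:

# Sketch: zero the just-written region of the namespace between passes.
build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json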
00:07:14.156 [2024-10-29 10:56:19.588455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72840 ] 00:07:14.156 { 00:07:14.156 "subsystems": [ 00:07:14.156 { 00:07:14.156 "subsystem": "bdev", 00:07:14.156 "config": [ 00:07:14.156 { 00:07:14.156 "params": { 00:07:14.156 "trtype": "pcie", 00:07:14.156 "traddr": "0000:00:10.0", 00:07:14.156 "name": "Nvme0" 00:07:14.156 }, 00:07:14.156 "method": "bdev_nvme_attach_controller" 00:07:14.156 }, 00:07:14.156 { 00:07:14.156 "method": "bdev_wait_for_examine" 00:07:14.156 } 00:07:14.156 ] 00:07:14.156 } 00:07:14.156 ] 00:07:14.156 } 00:07:14.414 [2024-10-29 10:56:19.735533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.414 [2024-10-29 10:56:19.754132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.414 [2024-10-29 10:56:19.782796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.414  [2024-10-29T10:56:20.169Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:14.672 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:14.672 10:56:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:14.672 { 00:07:14.672 "subsystems": [ 00:07:14.672 { 00:07:14.672 "subsystem": "bdev", 00:07:14.672 "config": [ 00:07:14.672 { 00:07:14.672 "params": { 00:07:14.672 "trtype": "pcie", 00:07:14.672 "traddr": "0000:00:10.0", 00:07:14.672 "name": "Nvme0" 00:07:14.672 }, 00:07:14.672 "method": "bdev_nvme_attach_controller" 00:07:14.672 }, 00:07:14.672 { 00:07:14.672 "method": "bdev_wait_for_examine" 00:07:14.672 } 00:07:14.672 ] 00:07:14.672 } 00:07:14.672 ] 00:07:14.672 } 00:07:14.672 [2024-10-29 10:56:20.051446] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:14.672 [2024-10-29 10:56:20.051550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72856 ] 00:07:14.942 [2024-10-29 10:56:20.199366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.942 [2024-10-29 10:56:20.221031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.942 [2024-10-29 10:56:20.253673] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.942  [2024-10-29T10:56:20.714Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:15.217 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:15.217 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.782 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:15.782 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:15.782 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:15.782 10:56:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:15.782 [2024-10-29 10:56:21.033671] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
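The block sizes exercised by dd_rw are derived by shifting the native block size (bss+=($((native_bs << bs))) for bs in {0..2}, per the trace above), and each pass's byte total is simply count times block size, which is why the sizes step from 61440 down to 49152 as the rest of this trace shows. A quick worked check of the numbers:

# Worked example of the sizes appearing in this trace (native_bs=4096);
# the counts are the observed values, not a claim about how the harness derives them.
native_bs=4096
for s in 0 1 2; do
    bs=$(( native_bs << s ))
    case $bs in 4096) count=15 ;; 8192) count=7 ;; 16384) count=3 ;; esac
    echo "bs=$bs count=$count size=$(( bs * count ))"
done
# bs=4096  count=15 size=61440
# bs=8192  count=7  size=57344
# bs=16384 count=3  size=49152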
00:07:15.782 [2024-10-29 10:56:21.034283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72875 ] 00:07:15.782 { 00:07:15.782 "subsystems": [ 00:07:15.782 { 00:07:15.782 "subsystem": "bdev", 00:07:15.782 "config": [ 00:07:15.782 { 00:07:15.782 "params": { 00:07:15.782 "trtype": "pcie", 00:07:15.782 "traddr": "0000:00:10.0", 00:07:15.782 "name": "Nvme0" 00:07:15.782 }, 00:07:15.782 "method": "bdev_nvme_attach_controller" 00:07:15.782 }, 00:07:15.782 { 00:07:15.782 "method": "bdev_wait_for_examine" 00:07:15.782 } 00:07:15.782 ] 00:07:15.782 } 00:07:15.782 ] 00:07:15.782 } 00:07:15.782 [2024-10-29 10:56:21.181214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.782 [2024-10-29 10:56:21.200282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.782 [2024-10-29 10:56:21.228172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.040  [2024-10-29T10:56:21.537Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:16.040 00:07:16.040 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:16.040 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:16.040 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.040 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.040 { 00:07:16.040 "subsystems": [ 00:07:16.040 { 00:07:16.040 "subsystem": "bdev", 00:07:16.040 "config": [ 00:07:16.040 { 00:07:16.040 "params": { 00:07:16.040 "trtype": "pcie", 00:07:16.040 "traddr": "0000:00:10.0", 00:07:16.040 "name": "Nvme0" 00:07:16.040 }, 00:07:16.040 "method": "bdev_nvme_attach_controller" 00:07:16.040 }, 00:07:16.040 { 00:07:16.040 "method": "bdev_wait_for_examine" 00:07:16.040 } 00:07:16.040 ] 00:07:16.040 } 00:07:16.040 ] 00:07:16.040 } 00:07:16.040 [2024-10-29 10:56:21.504056] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:16.040 [2024-10-29 10:56:21.504151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72888 ] 00:07:16.299 [2024-10-29 10:56:21.648435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.299 [2024-10-29 10:56:21.668494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.299 [2024-10-29 10:56:21.696182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.299  [2024-10-29T10:56:22.055Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:16.558 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:16.558 10:56:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:16.558 { 00:07:16.558 "subsystems": [ 00:07:16.558 { 00:07:16.558 "subsystem": "bdev", 00:07:16.558 "config": [ 00:07:16.558 { 00:07:16.558 "params": { 00:07:16.558 "trtype": "pcie", 00:07:16.558 "traddr": "0000:00:10.0", 00:07:16.558 "name": "Nvme0" 00:07:16.558 }, 00:07:16.558 "method": "bdev_nvme_attach_controller" 00:07:16.558 }, 00:07:16.558 { 00:07:16.558 "method": "bdev_wait_for_examine" 00:07:16.558 } 00:07:16.558 ] 00:07:16.558 } 00:07:16.558 ] 00:07:16.558 } 00:07:16.558 [2024-10-29 10:56:21.960576] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:16.558 [2024-10-29 10:56:21.960662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72904 ] 00:07:16.818 [2024-10-29 10:56:22.113785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.818 [2024-10-29 10:56:22.137862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.818 [2024-10-29 10:56:22.173555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.818  [2024-10-29T10:56:22.574Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:17.077 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:17.077 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.645 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:17.645 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:17.645 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.645 10:56:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:17.645 [2024-10-29 10:56:22.953700] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
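All of the dd_rw passes, including the 8192-byte, queue-depth-64 one launched just above, repeat the same cycle: write the generated dump file into the bdev, read it back into a second file, and require diff -q to report no difference before clear_nvme resets the namespace. Condensed into one hedged sketch, with repo-relative paths and the conf.json placeholder assumed earlier:

# Sketch of a single basic_rw pass; the harness runs this for every bs/qd combination.
bs=8192 qd=64 count=7
build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json conf.json
build/bin/spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=$bs --qd=$qd --count=$count --json conf.json
diff -q test/dd/dd.dump0 test/dd/dd.dump1   # must find no difference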
00:07:17.645 [2024-10-29 10:56:22.953792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72923 ] 00:07:17.645 { 00:07:17.645 "subsystems": [ 00:07:17.645 { 00:07:17.645 "subsystem": "bdev", 00:07:17.645 "config": [ 00:07:17.645 { 00:07:17.645 "params": { 00:07:17.645 "trtype": "pcie", 00:07:17.645 "traddr": "0000:00:10.0", 00:07:17.645 "name": "Nvme0" 00:07:17.645 }, 00:07:17.645 "method": "bdev_nvme_attach_controller" 00:07:17.645 }, 00:07:17.645 { 00:07:17.645 "method": "bdev_wait_for_examine" 00:07:17.645 } 00:07:17.645 ] 00:07:17.645 } 00:07:17.645 ] 00:07:17.645 } 00:07:17.645 [2024-10-29 10:56:23.100490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.645 [2024-10-29 10:56:23.118658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.903 [2024-10-29 10:56:23.147698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.904  [2024-10-29T10:56:23.401Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:17.904 00:07:17.904 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:17.904 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:17.904 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:17.904 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.163 [2024-10-29 10:56:23.408569] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:18.163 [2024-10-29 10:56:23.408838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72931 ] 00:07:18.163 { 00:07:18.163 "subsystems": [ 00:07:18.163 { 00:07:18.163 "subsystem": "bdev", 00:07:18.163 "config": [ 00:07:18.163 { 00:07:18.163 "params": { 00:07:18.163 "trtype": "pcie", 00:07:18.163 "traddr": "0000:00:10.0", 00:07:18.163 "name": "Nvme0" 00:07:18.163 }, 00:07:18.163 "method": "bdev_nvme_attach_controller" 00:07:18.163 }, 00:07:18.163 { 00:07:18.163 "method": "bdev_wait_for_examine" 00:07:18.163 } 00:07:18.163 ] 00:07:18.163 } 00:07:18.163 ] 00:07:18.163 } 00:07:18.163 [2024-10-29 10:56:23.555682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.163 [2024-10-29 10:56:23.574124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.163 [2024-10-29 10:56:23.601873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.422  [2024-10-29T10:56:23.919Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:18.422 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:18.422 10:56:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:18.422 [2024-10-29 10:56:23.861292] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:18.422 [2024-10-29 10:56:23.861687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72952 ] 00:07:18.422 { 00:07:18.422 "subsystems": [ 00:07:18.422 { 00:07:18.422 "subsystem": "bdev", 00:07:18.422 "config": [ 00:07:18.422 { 00:07:18.422 "params": { 00:07:18.422 "trtype": "pcie", 00:07:18.422 "traddr": "0000:00:10.0", 00:07:18.422 "name": "Nvme0" 00:07:18.422 }, 00:07:18.422 "method": "bdev_nvme_attach_controller" 00:07:18.422 }, 00:07:18.422 { 00:07:18.422 "method": "bdev_wait_for_examine" 00:07:18.422 } 00:07:18.422 ] 00:07:18.422 } 00:07:18.422 ] 00:07:18.422 } 00:07:18.680 [2024-10-29 10:56:24.015312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.680 [2024-10-29 10:56:24.033635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.680 [2024-10-29 10:56:24.061458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.680  [2024-10-29T10:56:24.436Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:18.939 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:18.939 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:19.198 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:19.198 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.198 10:56:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.457 { 00:07:19.457 "subsystems": [ 00:07:19.457 { 00:07:19.457 "subsystem": "bdev", 00:07:19.457 "config": [ 00:07:19.457 { 00:07:19.457 "params": { 00:07:19.457 "trtype": "pcie", 00:07:19.457 "traddr": "0000:00:10.0", 00:07:19.457 "name": "Nvme0" 00:07:19.457 }, 00:07:19.457 "method": "bdev_nvme_attach_controller" 00:07:19.457 }, 00:07:19.457 { 00:07:19.457 "method": "bdev_wait_for_examine" 00:07:19.457 } 00:07:19.457 ] 00:07:19.457 } 00:07:19.457 ] 00:07:19.457 } 00:07:19.457 [2024-10-29 10:56:24.740691] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:19.457 [2024-10-29 10:56:24.741544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72966 ] 00:07:19.457 [2024-10-29 10:56:24.889777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.457 [2024-10-29 10:56:24.910415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.457 [2024-10-29 10:56:24.938572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.716  [2024-10-29T10:56:25.213Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:19.716 00:07:19.716 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:19.716 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:19.716 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:19.716 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:19.716 { 00:07:19.716 "subsystems": [ 00:07:19.716 { 00:07:19.716 "subsystem": "bdev", 00:07:19.716 "config": [ 00:07:19.716 { 00:07:19.716 "params": { 00:07:19.716 "trtype": "pcie", 00:07:19.716 "traddr": "0000:00:10.0", 00:07:19.716 "name": "Nvme0" 00:07:19.716 }, 00:07:19.716 "method": "bdev_nvme_attach_controller" 00:07:19.716 }, 00:07:19.716 { 00:07:19.716 "method": "bdev_wait_for_examine" 00:07:19.716 } 00:07:19.716 ] 00:07:19.716 } 00:07:19.716 ] 00:07:19.716 } 00:07:19.716 [2024-10-29 10:56:25.196051] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:19.716 [2024-10-29 10:56:25.196315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72979 ] 00:07:19.976 [2024-10-29 10:56:25.342437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.976 [2024-10-29 10:56:25.360628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.976 [2024-10-29 10:56:25.388572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.235  [2024-10-29T10:56:25.732Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:20.235 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:20.235 10:56:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:20.235 { 00:07:20.235 "subsystems": [ 00:07:20.235 { 00:07:20.235 "subsystem": "bdev", 00:07:20.235 "config": [ 00:07:20.235 { 00:07:20.235 "params": { 00:07:20.235 "trtype": "pcie", 00:07:20.235 "traddr": "0000:00:10.0", 00:07:20.235 "name": "Nvme0" 00:07:20.235 }, 00:07:20.235 "method": "bdev_nvme_attach_controller" 00:07:20.235 }, 00:07:20.235 { 00:07:20.235 "method": "bdev_wait_for_examine" 00:07:20.235 } 00:07:20.235 ] 00:07:20.235 } 00:07:20.235 ] 00:07:20.235 } 00:07:20.235 [2024-10-29 10:56:25.650064] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:20.235 [2024-10-29 10:56:25.650158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72994 ] 00:07:20.495 [2024-10-29 10:56:25.796949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.495 [2024-10-29 10:56:25.815191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.495 [2024-10-29 10:56:25.843267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.495  [2024-10-29T10:56:26.251Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:20.754 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:20.754 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.012 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:21.012 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:21.012 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.012 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.271 { 00:07:21.271 "subsystems": [ 00:07:21.271 { 00:07:21.271 "subsystem": "bdev", 00:07:21.271 "config": [ 00:07:21.271 { 00:07:21.271 "params": { 00:07:21.271 "trtype": "pcie", 00:07:21.271 "traddr": "0000:00:10.0", 00:07:21.271 "name": "Nvme0" 00:07:21.271 }, 00:07:21.271 "method": "bdev_nvme_attach_controller" 00:07:21.271 }, 00:07:21.271 { 00:07:21.271 "method": "bdev_wait_for_examine" 00:07:21.271 } 00:07:21.271 ] 00:07:21.271 } 00:07:21.271 ] 00:07:21.271 } 00:07:21.271 [2024-10-29 10:56:26.548022] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:21.271 [2024-10-29 10:56:26.548146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73009 ] 00:07:21.271 [2024-10-29 10:56:26.695687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.271 [2024-10-29 10:56:26.716062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.271 [2024-10-29 10:56:26.748916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.531  [2024-10-29T10:56:27.028Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:21.531 00:07:21.531 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:21.531 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:21.531 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:21.531 10:56:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:21.531 { 00:07:21.531 "subsystems": [ 00:07:21.531 { 00:07:21.531 "subsystem": "bdev", 00:07:21.531 "config": [ 00:07:21.531 { 00:07:21.531 "params": { 00:07:21.531 "trtype": "pcie", 00:07:21.531 "traddr": "0000:00:10.0", 00:07:21.531 "name": "Nvme0" 00:07:21.531 }, 00:07:21.531 "method": "bdev_nvme_attach_controller" 00:07:21.531 }, 00:07:21.531 { 00:07:21.531 "method": "bdev_wait_for_examine" 00:07:21.531 } 00:07:21.531 ] 00:07:21.531 } 00:07:21.531 ] 00:07:21.531 } 00:07:21.531 [2024-10-29 10:56:27.009888] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:21.531 [2024-10-29 10:56:27.009974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73027 ] 00:07:21.790 [2024-10-29 10:56:27.154320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.790 [2024-10-29 10:56:27.175115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.790 [2024-10-29 10:56:27.203361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.049  [2024-10-29T10:56:27.546Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:22.049 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:22.049 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.049 { 00:07:22.049 "subsystems": [ 00:07:22.049 { 00:07:22.049 "subsystem": "bdev", 00:07:22.049 "config": [ 00:07:22.049 { 00:07:22.049 "params": { 00:07:22.049 "trtype": "pcie", 00:07:22.049 "traddr": "0000:00:10.0", 00:07:22.049 "name": "Nvme0" 00:07:22.049 }, 00:07:22.049 "method": "bdev_nvme_attach_controller" 00:07:22.049 }, 00:07:22.049 { 00:07:22.049 "method": "bdev_wait_for_examine" 00:07:22.049 } 00:07:22.049 ] 00:07:22.049 } 00:07:22.049 ] 00:07:22.049 } 00:07:22.049 [2024-10-29 10:56:27.464825] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
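Putting the traced commands together, every dd_rw iteration has the same shape: write the random dump file to the bdev, read it back, require a byte-identical round trip, then zero the start of the bdev before the next case. A self-contained sketch of the bs=16384, qd=64, count=3 (48 KiB) iteration above, with variable names introduced only for this sketch:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

  # Write 3 x 16 KiB from the random dump file to Nvme0n1, then read the same amount back.
  "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=16384 --qd=64 --json <(printf '%s' "$conf")
  "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=16384 --qd=64 --count=3 --json <(printf '%s' "$conf")
  diff -q "$DUMP0" "$DUMP1"

  # clear_nvme: overwrite the first 1 MiB of the bdev with zeroes, as traced above.
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")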
00:07:22.049 [2024-10-29 10:56:27.464917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73037 ] 00:07:22.308 [2024-10-29 10:56:27.611870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.308 [2024-10-29 10:56:27.630262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.308 [2024-10-29 10:56:27.657914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.308  [2024-10-29T10:56:28.064Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:22.567 00:07:22.567 00:07:22.567 real 0m11.346s 00:07:22.567 user 0m8.359s 00:07:22.567 sys 0m3.546s 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.567 ************************************ 00:07:22.567 END TEST dd_rw 00:07:22.567 ************************************ 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:22.567 ************************************ 00:07:22.567 START TEST dd_rw_offset 00:07:22.567 ************************************ 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=mlyxp35dfbyd0o1lcxturv0sfvv6uvymzdmpdvkl2l69n10pwvlrv38zvvlcymcd7jcn2udicbq1ga5n3gew51r3kf67p5licz1tnxlbfwb4pu2tc18hjeo4sx4dc1n1exrwn0toqya7f87k13ls7lg0c1baua49wxm2w6fgpc1f8bkyl8t86r8l7m62db0qu994ikxowtdgv94y9uqbi0jemaiiiahq434l20ayr7hrqn2k6tbg3k7g3oy1hrtyemw3dgdgzsektusf3no9kdrhpvt56ims4tf3z6cxdpu33huwwusd1unrjp9wzjpbxbmczl7tsphhaftpriizjoqxiu3mhjah5t2ee00jdqprsqkorzmztp9z5c0dm675ghsbodcghcjtombvha6nah224j2nzugetk7xi8rpkvt1q7pbd7fsd6e8avo1jjrdsyt453mxumrbyx10v9ekyftitybrubgnb0ky51qpiy2g67ciehbyjko6faqxkzry9pj5wji3t37tqltmh1jon7wx2rmg9ucquqmr93vvbcrc80zo4m5ls6ir5ntps41evasv2e8vnslqmde8593mo6lkll10ben7jm05s637ki5hxyb3ugrpnz4aejrsjj541fe7smoq5nbiyasl00i37re635acf0ev5zrn7x1lguw88xvsy6iy63bxqkmcal6s9gv6s28vg1m2sok2mazlokj3i00arlonaqckg61rt1m1aq4bv2qpgxsklsguotlscozt8yz59e6cw6vdiwkmodbvrfecv7nbm6d6keuybq7tj1ghwbiu3nqvqopm4i9nujx8nn6gxlxsvgrighzmb2cjqijek4plq16sdiz4d6z3k0jldmwx2atvj4uy9f55zag1vig2j47dnxhufp4i628fyvlftv6v44269auxyssxwja9yrfphds9xtdogwibrz00xowjz9uncap5gj0y97bz3mihp90mvxiiiv2c1wwcxd4f6tph2bqw8y2pamuamykc1mddgfa8keh2jtn7doerwxshs70z0u0chhlw2a49oumy0816qtqtdvvxts2zrmnn0z65kiq2gprsxhqttcny0rtr27dfa6r0wgue766ebzwlr1r0axju2n9ndijuenoylko0regm4qjaklqs9fnv29o5f1h9v0izvfzu7paieli972v4fnkeswwo0pt37wmzkxk9xcdc6mbrv3t7qszafzvj2aps1x5x79hn3l573i76ulajd0uoqwpyyzf7eiss3mlcr1zmtgwdn7mnr2v8ofoqnbiya44oszdl4a2svrpe0lsbe0fnaupkp4m87vqhcoe61gg5vqx3q4ouwwupwn45jgos6dzc20vhgtg0ikqur3uq5xn6bjq4om6p1m9szztanpketi18pl8ek8q7632a26rmfcmjzpsi3gjzozmrmnu9fy984zh4causnydyaw3gjlgg62z990dpnvehno3hum4uhmmk9x8l0s0zmeoituupkjlg2oaj2mb6pp2ywpocgx6fjgm6v9tn5tmc47ijhce3bzh2h76xpv4oqftxs6mec7rg4sgzjkzsts310n8u5mqplix0pigmyt5yq65fmwu6rhyvb6xa4883srhh1h51nivrspbsql40agk5gy0dey8v5nirzbiqabgrsz2urflwidxg9ooeor6t0o57rl4pzt2vmohuik0d6kaj5byum07sd74hmykhayqjg8s9d4y1h4wlo7ofo56h23lfoqwmjifk4ohe8hco734rjtk3iq0x34u9cmd64okc3eilxnly4mh4ktdc6km3od4fyxla66qeyx4ka5870ybipp2qv9f76lm80ivb611ur3rhhf4s5eahx58x26lk5v67h2t0amv674a14ylhcnxi2j9vsm1ez956ikcesilpne2via1mf9zljss1x1a5kw6j4lhd1n00pnn9zxgteqtit80gboyfr2ndr4uaojwsz0rulqo2vtr7n4x84xv1b3np5dki2jdhgn33basez99gjozy6il3m40fw6gxne5n7rzny4hlrehkqxk51k561tviu4eojae0q5p515ilovl4jmld12n7safq7r8wighrfqkqi1enh4cwhdw9f2913q7cgnyawnhxt657jy664kr8vf8dcc7l46awb4iskxqd3zjxo9tcr48gx05tv12w2lfkyl9lwxp8kmpb3yqaa8bup0ed4wtsg68roe12pwpuo1jio4vkcl9bv7lbbdlseeg89izzgy9qgtz2yyxdo4w8325p5ru8qhdv4eseee0ypzi475hcs8mqnudtcuseseios9uf0v2qho5lvpcd1udlhrlbrhp2ssakzisadwbndznwduojrgxm1b7i63ptpz9jg00ta5jjd21c31m6fvfn9o5h7yvoop50siiil7hwh6om8p2y95fdp6c40as7dmlkggxiypx6yhiyuceqtuotin4d4ja4do644hrc42rbbtu5ok8sewsg96hcokti50kdtl1cnxws2xuhccp43g8bbs74col1ui7hbh5om1w47l810ug8ox3wgc6i1n0z8gedh03qs7fd9kt69p0x8mmho0h3zs23b64eez48ci8qc9i6tyjd06d3bl2uurhr2tul9zkcvegfd2p1z6pwwkz1rx8tnmav2393wo9ew4qoutbhmxolrjcd7i3phwdkr6gghjjr3iwj8vedawm4cwhk6yk4b9noewwqfyi837b697o07e3ds82djep2lxjyhkc1v21v4cdu9io0m2osw1ea8iyhs4vj3x3p3rkrza1tnv76br73l0igxc6qsv0wd69ickj8zgvi7eq87iadu9cuc5u9k4mqp21cv7450dpswsgihcal3qq883ftayencamc7pnexhermq8nk1ayb83sug1hqwc0wly0kakfdzpjuh9wmwd8k53lmofw9pwnv1elny0861iz7vy8ij8jc15elmyeowsmbvo4u0fqthmbxndrvi165238mpnd3033ep8e0artv3ozgioh33qbcu953t0waamsqjhn6ouqcacsbqdfgp1yelscp57am30zjux9sypwkz6zmawb8q050apdlb97byjpn4atpiquzs6q6f88doj6114kgnktjo3rvgwy8excc4ic9he35rpu8mwhl7j6vgs47qa15uk05wrma4xas0e50hlavjngmcgekhw5x95ct6wosoc1lcx8j1wkee1i9eku557j4hchti62j9wu16xw46gklpgyi3fi9bwwbp6h95kot5fnqvlkk40m78qv195vf5z9mxmoib7u93dxjs5i71qr23u7pp54id2vhde7k4vgepz6doyg50pnht1gwquzoj3th7ynumsndhle2jg1x1l6cqdoq2ixe6686v2g5vbicqibgxjfnv5k2w0ixa3hjuhnsyptcxi2peysawqlerbag7ek6v75tnkdbbn45m37ye7q16woddll9s7ikoafrkng1vpmzw2tvtcapy
ws4okxoy9x7wofsn0wynknsepbvq9e04kn69krq4m5eg0xy86ei200oyof5ifbw39b3o93zzd6zzyajkg8c4be6jghcele1t8dkfq9brd22zqzmxrfgwv0ouvo0ev045oirrp97k1jkd46yvfawtxolsmlcwn90mm4rujn2zz6hf3jipocap2rtjw4pwlfctank5n6eq05zj838d179w1ok2zi1fdhvijo85dqez4k93tfgh974epotm0kurfxbew2a8feg5sue2qdsyq3ygkl5tx7f2zjotcsftc69q8kstjwpywdy7ybvitaptsjdmn3l3kawmbshjuhyzkwx0efsa3dvm97uowgido9sps8nf0pn2vp2f203nfi90zj37p4macprbzy9gkwkaqvzu3rquxxf50a58ppqolyp03oq8bd8w8do7tvz7an4txwdwzvil6jd7svvita7yuklrojquu92ol317gnq8x66ptuxi48dfgat6cntcwhl79a5k2p0efeq01fpbtpx8ra3pr3yjfs03mub8pg 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:22.567 10:56:27 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:22.567 { 00:07:22.567 "subsystems": [ 00:07:22.567 { 00:07:22.567 "subsystem": "bdev", 00:07:22.567 "config": [ 00:07:22.567 { 00:07:22.567 "params": { 00:07:22.567 "trtype": "pcie", 00:07:22.567 "traddr": "0000:00:10.0", 00:07:22.567 "name": "Nvme0" 00:07:22.567 }, 00:07:22.567 "method": "bdev_nvme_attach_controller" 00:07:22.567 }, 00:07:22.567 { 00:07:22.567 "method": "bdev_wait_for_examine" 00:07:22.567 } 00:07:22.567 ] 00:07:22.567 } 00:07:22.567 ] 00:07:22.567 } 00:07:22.567 [2024-10-29 10:56:28.022233] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:22.567 [2024-10-29 10:56:28.022326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73073 ] 00:07:22.828 [2024-10-29 10:56:28.170385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.828 [2024-10-29 10:56:28.189035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.828 [2024-10-29 10:56:28.216957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.828  [2024-10-29T10:56:28.584Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:23.087 00:07:23.087 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:23.087 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:23.087 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:23.087 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:23.087 [2024-10-29 10:56:28.474165] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
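The long data= line above is a 4096-character random buffer produced by the gen_bytes helper from dd/common.sh; dd_rw_offset writes it to the bdev at --seek=1 and reads the same region back with --skip=1 --count=1. A trimmed-down sketch of those two invocations, using /dev/urandom as a stand-in for gen_bytes (whose body is not shown in this log):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'

  # Stand-in for gen_bytes 4096: a 4096-character printable random string.
  data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)
  printf '%s' "$data" > "$DUMP0"

  # Write the buffer at --seek=1, then read it back with --skip=1 --count=1.
  "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")
  "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(printf '%s' "$conf")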
00:07:23.087 [2024-10-29 10:56:28.474434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73081 ] 00:07:23.087 { 00:07:23.087 "subsystems": [ 00:07:23.087 { 00:07:23.087 "subsystem": "bdev", 00:07:23.087 "config": [ 00:07:23.087 { 00:07:23.087 "params": { 00:07:23.087 "trtype": "pcie", 00:07:23.087 "traddr": "0000:00:10.0", 00:07:23.087 "name": "Nvme0" 00:07:23.087 }, 00:07:23.087 "method": "bdev_nvme_attach_controller" 00:07:23.087 }, 00:07:23.087 { 00:07:23.087 "method": "bdev_wait_for_examine" 00:07:23.087 } 00:07:23.087 ] 00:07:23.087 } 00:07:23.087 ] 00:07:23.087 } 00:07:23.345 [2024-10-29 10:56:28.620338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.345 [2024-10-29 10:56:28.638820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.345 [2024-10-29 10:56:28.666698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.345  [2024-10-29T10:56:29.102Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:23.605 00:07:23.605 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:23.605 ************************************ 00:07:23.605 END TEST dd_rw_offset 00:07:23.605 ************************************ 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ mlyxp35dfbyd0o1lcxturv0sfvv6uvymzdmpdvkl2l69n10pwvlrv38zvvlcymcd7jcn2udicbq1ga5n3gew51r3kf67p5licz1tnxlbfwb4pu2tc18hjeo4sx4dc1n1exrwn0toqya7f87k13ls7lg0c1baua49wxm2w6fgpc1f8bkyl8t86r8l7m62db0qu994ikxowtdgv94y9uqbi0jemaiiiahq434l20ayr7hrqn2k6tbg3k7g3oy1hrtyemw3dgdgzsektusf3no9kdrhpvt56ims4tf3z6cxdpu33huwwusd1unrjp9wzjpbxbmczl7tsphhaftpriizjoqxiu3mhjah5t2ee00jdqprsqkorzmztp9z5c0dm675ghsbodcghcjtombvha6nah224j2nzugetk7xi8rpkvt1q7pbd7fsd6e8avo1jjrdsyt453mxumrbyx10v9ekyftitybrubgnb0ky51qpiy2g67ciehbyjko6faqxkzry9pj5wji3t37tqltmh1jon7wx2rmg9ucquqmr93vvbcrc80zo4m5ls6ir5ntps41evasv2e8vnslqmde8593mo6lkll10ben7jm05s637ki5hxyb3ugrpnz4aejrsjj541fe7smoq5nbiyasl00i37re635acf0ev5zrn7x1lguw88xvsy6iy63bxqkmcal6s9gv6s28vg1m2sok2mazlokj3i00arlonaqckg61rt1m1aq4bv2qpgxsklsguotlscozt8yz59e6cw6vdiwkmodbvrfecv7nbm6d6keuybq7tj1ghwbiu3nqvqopm4i9nujx8nn6gxlxsvgrighzmb2cjqijek4plq16sdiz4d6z3k0jldmwx2atvj4uy9f55zag1vig2j47dnxhufp4i628fyvlftv6v44269auxyssxwja9yrfphds9xtdogwibrz00xowjz9uncap5gj0y97bz3mihp90mvxiiiv2c1wwcxd4f6tph2bqw8y2pamuamykc1mddgfa8keh2jtn7doerwxshs70z0u0chhlw2a49oumy0816qtqtdvvxts2zrmnn0z65kiq2gprsxhqttcny0rtr27dfa6r0wgue766ebzwlr1r0axju2n9ndijuenoylko0regm4qjaklqs9fnv29o5f1h9v0izvfzu7paieli972v4fnkeswwo0pt37wmzkxk9xcdc6mbrv3t7qszafzvj2aps1x5x79hn3l573i76ulajd0uoqwpyyzf7eiss3mlcr1zmtgwdn7mnr2v8ofoqnbiya44oszdl4a2svrpe0lsbe0fnaupkp4m87vqhcoe61gg5vqx3q4ouwwupwn45jgos6dzc20vhgtg0ikqur3uq5xn6bjq4om6p1m9szztanpketi18pl8ek8q7632a26rmfcmjzpsi3gjzozmrmnu9fy984zh4causnydyaw3gjlgg62z990dpnvehno3hum4uhmmk9x8l0s0zmeoituupkjlg2oaj2mb6pp2ywpocgx6fjgm6v9tn5tmc47ijhce3bzh2h76xpv4oqftxs6mec7rg4sgzjkzsts310n8u5mqplix0pigmyt5yq65fmwu6rhyvb6xa4883srhh1h51nivrspbsql40agk5gy0dey8v5nirzbiqabgrsz2urflwidxg9ooeor6t0o57rl4pzt2vmohuik0d6kaj5byum07sd74hmykhayqjg8s9d4y1h4wlo7ofo56h23lfoqwmjifk4ohe8hco734rjtk3iq0x34u9cmd64okc3eilxnly4mh4ktdc6km3od4fyxla66qeyx4ka5870ybipp2qv9f76lm80ivb611ur3rhhf4s5eahx58x26lk5v67h2t0amv674a14ylhcnxi2j9vsm1ez956ikcesilpne2via1mf9zljss1x1a5kw6j4lhd1n00pnn9zxgteqtit80gbo
yfr2ndr4uaojwsz0rulqo2vtr7n4x84xv1b3np5dki2jdhgn33basez99gjozy6il3m40fw6gxne5n7rzny4hlrehkqxk51k561tviu4eojae0q5p515ilovl4jmld12n7safq7r8wighrfqkqi1enh4cwhdw9f2913q7cgnyawnhxt657jy664kr8vf8dcc7l46awb4iskxqd3zjxo9tcr48gx05tv12w2lfkyl9lwxp8kmpb3yqaa8bup0ed4wtsg68roe12pwpuo1jio4vkcl9bv7lbbdlseeg89izzgy9qgtz2yyxdo4w8325p5ru8qhdv4eseee0ypzi475hcs8mqnudtcuseseios9uf0v2qho5lvpcd1udlhrlbrhp2ssakzisadwbndznwduojrgxm1b7i63ptpz9jg00ta5jjd21c31m6fvfn9o5h7yvoop50siiil7hwh6om8p2y95fdp6c40as7dmlkggxiypx6yhiyuceqtuotin4d4ja4do644hrc42rbbtu5ok8sewsg96hcokti50kdtl1cnxws2xuhccp43g8bbs74col1ui7hbh5om1w47l810ug8ox3wgc6i1n0z8gedh03qs7fd9kt69p0x8mmho0h3zs23b64eez48ci8qc9i6tyjd06d3bl2uurhr2tul9zkcvegfd2p1z6pwwkz1rx8tnmav2393wo9ew4qoutbhmxolrjcd7i3phwdkr6gghjjr3iwj8vedawm4cwhk6yk4b9noewwqfyi837b697o07e3ds82djep2lxjyhkc1v21v4cdu9io0m2osw1ea8iyhs4vj3x3p3rkrza1tnv76br73l0igxc6qsv0wd69ickj8zgvi7eq87iadu9cuc5u9k4mqp21cv7450dpswsgihcal3qq883ftayencamc7pnexhermq8nk1ayb83sug1hqwc0wly0kakfdzpjuh9wmwd8k53lmofw9pwnv1elny0861iz7vy8ij8jc15elmyeowsmbvo4u0fqthmbxndrvi165238mpnd3033ep8e0artv3ozgioh33qbcu953t0waamsqjhn6ouqcacsbqdfgp1yelscp57am30zjux9sypwkz6zmawb8q050apdlb97byjpn4atpiquzs6q6f88doj6114kgnktjo3rvgwy8excc4ic9he35rpu8mwhl7j6vgs47qa15uk05wrma4xas0e50hlavjngmcgekhw5x95ct6wosoc1lcx8j1wkee1i9eku557j4hchti62j9wu16xw46gklpgyi3fi9bwwbp6h95kot5fnqvlkk40m78qv195vf5z9mxmoib7u93dxjs5i71qr23u7pp54id2vhde7k4vgepz6doyg50pnht1gwquzoj3th7ynumsndhle2jg1x1l6cqdoq2ixe6686v2g5vbicqibgxjfnv5k2w0ixa3hjuhnsyptcxi2peysawqlerbag7ek6v75tnkdbbn45m37ye7q16woddll9s7ikoafrkng1vpmzw2tvtcapyws4okxoy9x7wofsn0wynknsepbvq9e04kn69krq4m5eg0xy86ei200oyof5ifbw39b3o93zzd6zzyajkg8c4be6jghcele1t8dkfq9brd22zqzmxrfgwv0ouvo0ev045oirrp97k1jkd46yvfawtxolsmlcwn90mm4rujn2zz6hf3jipocap2rtjw4pwlfctank5n6eq05zj838d179w1ok2zi1fdhvijo85dqez4k93tfgh974epotm0kurfxbew2a8feg5sue2qdsyq3ygkl5tx7f2zjotcsftc69q8kstjwpywdy7ybvitaptsjdmn3l3kawmbshjuhyzkwx0efsa3dvm97uowgido9sps8nf0pn2vp2f203nfi90zj37p4macprbzy9gkwkaqvzu3rquxxf50a58ppqolyp03oq8bd8w8do7tvz7an4txwdwzvil6jd7svvita7yuklrojquu92ol317gnq8x66ptuxi48dfgat6cntcwhl79a5k2p0efeq01fpbtpx8ra3pr3yjfs03mub8pg == 
\m\l\y\x\p\3\5\d\f\b\y\d\0\o\1\l\c\x\t\u\r\v\0\s\f\v\v\6\u\v\y\m\z\d\m\p\d\v\k\l\2\l\6\9\n\1\0\p\w\v\l\r\v\3\8\z\v\v\l\c\y\m\c\d\7\j\c\n\2\u\d\i\c\b\q\1\g\a\5\n\3\g\e\w\5\1\r\3\k\f\6\7\p\5\l\i\c\z\1\t\n\x\l\b\f\w\b\4\p\u\2\t\c\1\8\h\j\e\o\4\s\x\4\d\c\1\n\1\e\x\r\w\n\0\t\o\q\y\a\7\f\8\7\k\1\3\l\s\7\l\g\0\c\1\b\a\u\a\4\9\w\x\m\2\w\6\f\g\p\c\1\f\8\b\k\y\l\8\t\8\6\r\8\l\7\m\6\2\d\b\0\q\u\9\9\4\i\k\x\o\w\t\d\g\v\9\4\y\9\u\q\b\i\0\j\e\m\a\i\i\i\a\h\q\4\3\4\l\2\0\a\y\r\7\h\r\q\n\2\k\6\t\b\g\3\k\7\g\3\o\y\1\h\r\t\y\e\m\w\3\d\g\d\g\z\s\e\k\t\u\s\f\3\n\o\9\k\d\r\h\p\v\t\5\6\i\m\s\4\t\f\3\z\6\c\x\d\p\u\3\3\h\u\w\w\u\s\d\1\u\n\r\j\p\9\w\z\j\p\b\x\b\m\c\z\l\7\t\s\p\h\h\a\f\t\p\r\i\i\z\j\o\q\x\i\u\3\m\h\j\a\h\5\t\2\e\e\0\0\j\d\q\p\r\s\q\k\o\r\z\m\z\t\p\9\z\5\c\0\d\m\6\7\5\g\h\s\b\o\d\c\g\h\c\j\t\o\m\b\v\h\a\6\n\a\h\2\2\4\j\2\n\z\u\g\e\t\k\7\x\i\8\r\p\k\v\t\1\q\7\p\b\d\7\f\s\d\6\e\8\a\v\o\1\j\j\r\d\s\y\t\4\5\3\m\x\u\m\r\b\y\x\1\0\v\9\e\k\y\f\t\i\t\y\b\r\u\b\g\n\b\0\k\y\5\1\q\p\i\y\2\g\6\7\c\i\e\h\b\y\j\k\o\6\f\a\q\x\k\z\r\y\9\p\j\5\w\j\i\3\t\3\7\t\q\l\t\m\h\1\j\o\n\7\w\x\2\r\m\g\9\u\c\q\u\q\m\r\9\3\v\v\b\c\r\c\8\0\z\o\4\m\5\l\s\6\i\r\5\n\t\p\s\4\1\e\v\a\s\v\2\e\8\v\n\s\l\q\m\d\e\8\5\9\3\m\o\6\l\k\l\l\1\0\b\e\n\7\j\m\0\5\s\6\3\7\k\i\5\h\x\y\b\3\u\g\r\p\n\z\4\a\e\j\r\s\j\j\5\4\1\f\e\7\s\m\o\q\5\n\b\i\y\a\s\l\0\0\i\3\7\r\e\6\3\5\a\c\f\0\e\v\5\z\r\n\7\x\1\l\g\u\w\8\8\x\v\s\y\6\i\y\6\3\b\x\q\k\m\c\a\l\6\s\9\g\v\6\s\2\8\v\g\1\m\2\s\o\k\2\m\a\z\l\o\k\j\3\i\0\0\a\r\l\o\n\a\q\c\k\g\6\1\r\t\1\m\1\a\q\4\b\v\2\q\p\g\x\s\k\l\s\g\u\o\t\l\s\c\o\z\t\8\y\z\5\9\e\6\c\w\6\v\d\i\w\k\m\o\d\b\v\r\f\e\c\v\7\n\b\m\6\d\6\k\e\u\y\b\q\7\t\j\1\g\h\w\b\i\u\3\n\q\v\q\o\p\m\4\i\9\n\u\j\x\8\n\n\6\g\x\l\x\s\v\g\r\i\g\h\z\m\b\2\c\j\q\i\j\e\k\4\p\l\q\1\6\s\d\i\z\4\d\6\z\3\k\0\j\l\d\m\w\x\2\a\t\v\j\4\u\y\9\f\5\5\z\a\g\1\v\i\g\2\j\4\7\d\n\x\h\u\f\p\4\i\6\2\8\f\y\v\l\f\t\v\6\v\4\4\2\6\9\a\u\x\y\s\s\x\w\j\a\9\y\r\f\p\h\d\s\9\x\t\d\o\g\w\i\b\r\z\0\0\x\o\w\j\z\9\u\n\c\a\p\5\g\j\0\y\9\7\b\z\3\m\i\h\p\9\0\m\v\x\i\i\i\v\2\c\1\w\w\c\x\d\4\f\6\t\p\h\2\b\q\w\8\y\2\p\a\m\u\a\m\y\k\c\1\m\d\d\g\f\a\8\k\e\h\2\j\t\n\7\d\o\e\r\w\x\s\h\s\7\0\z\0\u\0\c\h\h\l\w\2\a\4\9\o\u\m\y\0\8\1\6\q\t\q\t\d\v\v\x\t\s\2\z\r\m\n\n\0\z\6\5\k\i\q\2\g\p\r\s\x\h\q\t\t\c\n\y\0\r\t\r\2\7\d\f\a\6\r\0\w\g\u\e\7\6\6\e\b\z\w\l\r\1\r\0\a\x\j\u\2\n\9\n\d\i\j\u\e\n\o\y\l\k\o\0\r\e\g\m\4\q\j\a\k\l\q\s\9\f\n\v\2\9\o\5\f\1\h\9\v\0\i\z\v\f\z\u\7\p\a\i\e\l\i\9\7\2\v\4\f\n\k\e\s\w\w\o\0\p\t\3\7\w\m\z\k\x\k\9\x\c\d\c\6\m\b\r\v\3\t\7\q\s\z\a\f\z\v\j\2\a\p\s\1\x\5\x\7\9\h\n\3\l\5\7\3\i\7\6\u\l\a\j\d\0\u\o\q\w\p\y\y\z\f\7\e\i\s\s\3\m\l\c\r\1\z\m\t\g\w\d\n\7\m\n\r\2\v\8\o\f\o\q\n\b\i\y\a\4\4\o\s\z\d\l\4\a\2\s\v\r\p\e\0\l\s\b\e\0\f\n\a\u\p\k\p\4\m\8\7\v\q\h\c\o\e\6\1\g\g\5\v\q\x\3\q\4\o\u\w\w\u\p\w\n\4\5\j\g\o\s\6\d\z\c\2\0\v\h\g\t\g\0\i\k\q\u\r\3\u\q\5\x\n\6\b\j\q\4\o\m\6\p\1\m\9\s\z\z\t\a\n\p\k\e\t\i\1\8\p\l\8\e\k\8\q\7\6\3\2\a\2\6\r\m\f\c\m\j\z\p\s\i\3\g\j\z\o\z\m\r\m\n\u\9\f\y\9\8\4\z\h\4\c\a\u\s\n\y\d\y\a\w\3\g\j\l\g\g\6\2\z\9\9\0\d\p\n\v\e\h\n\o\3\h\u\m\4\u\h\m\m\k\9\x\8\l\0\s\0\z\m\e\o\i\t\u\u\p\k\j\l\g\2\o\a\j\2\m\b\6\p\p\2\y\w\p\o\c\g\x\6\f\j\g\m\6\v\9\t\n\5\t\m\c\4\7\i\j\h\c\e\3\b\z\h\2\h\7\6\x\p\v\4\o\q\f\t\x\s\6\m\e\c\7\r\g\4\s\g\z\j\k\z\s\t\s\3\1\0\n\8\u\5\m\q\p\l\i\x\0\p\i\g\m\y\t\5\y\q\6\5\f\m\w\u\6\r\h\y\v\b\6\x\a\4\8\8\3\s\r\h\h\1\h\5\1\n\i\v\r\s\p\b\s\q\l\4\0\a\g\k\5\g\y\0\d\e\y\8\v\5\n\i\r\z\b\i\q\a\b\g\r\s\z\2\u\r\f\l\w\i\d\x\g\9\o\o\e\o\r\6\t\0\o\5\7\r\l\4\p\z\t\2\v\m\o\h\u\i\k\0\d\6\k\a\j\5\b\y\u\m\0\7\s\d\7\4\h\m\y\k\h\a\y\q\j\g\8\s\9\d\4\y\1\h\4\w\l\o\7\o\f\o\5\6\h\2\3\l\f\o\
q\w\m\j\i\f\k\4\o\h\e\8\h\c\o\7\3\4\r\j\t\k\3\i\q\0\x\3\4\u\9\c\m\d\6\4\o\k\c\3\e\i\l\x\n\l\y\4\m\h\4\k\t\d\c\6\k\m\3\o\d\4\f\y\x\l\a\6\6\q\e\y\x\4\k\a\5\8\7\0\y\b\i\p\p\2\q\v\9\f\7\6\l\m\8\0\i\v\b\6\1\1\u\r\3\r\h\h\f\4\s\5\e\a\h\x\5\8\x\2\6\l\k\5\v\6\7\h\2\t\0\a\m\v\6\7\4\a\1\4\y\l\h\c\n\x\i\2\j\9\v\s\m\1\e\z\9\5\6\i\k\c\e\s\i\l\p\n\e\2\v\i\a\1\m\f\9\z\l\j\s\s\1\x\1\a\5\k\w\6\j\4\l\h\d\1\n\0\0\p\n\n\9\z\x\g\t\e\q\t\i\t\8\0\g\b\o\y\f\r\2\n\d\r\4\u\a\o\j\w\s\z\0\r\u\l\q\o\2\v\t\r\7\n\4\x\8\4\x\v\1\b\3\n\p\5\d\k\i\2\j\d\h\g\n\3\3\b\a\s\e\z\9\9\g\j\o\z\y\6\i\l\3\m\4\0\f\w\6\g\x\n\e\5\n\7\r\z\n\y\4\h\l\r\e\h\k\q\x\k\5\1\k\5\6\1\t\v\i\u\4\e\o\j\a\e\0\q\5\p\5\1\5\i\l\o\v\l\4\j\m\l\d\1\2\n\7\s\a\f\q\7\r\8\w\i\g\h\r\f\q\k\q\i\1\e\n\h\4\c\w\h\d\w\9\f\2\9\1\3\q\7\c\g\n\y\a\w\n\h\x\t\6\5\7\j\y\6\6\4\k\r\8\v\f\8\d\c\c\7\l\4\6\a\w\b\4\i\s\k\x\q\d\3\z\j\x\o\9\t\c\r\4\8\g\x\0\5\t\v\1\2\w\2\l\f\k\y\l\9\l\w\x\p\8\k\m\p\b\3\y\q\a\a\8\b\u\p\0\e\d\4\w\t\s\g\6\8\r\o\e\1\2\p\w\p\u\o\1\j\i\o\4\v\k\c\l\9\b\v\7\l\b\b\d\l\s\e\e\g\8\9\i\z\z\g\y\9\q\g\t\z\2\y\y\x\d\o\4\w\8\3\2\5\p\5\r\u\8\q\h\d\v\4\e\s\e\e\e\0\y\p\z\i\4\7\5\h\c\s\8\m\q\n\u\d\t\c\u\s\e\s\e\i\o\s\9\u\f\0\v\2\q\h\o\5\l\v\p\c\d\1\u\d\l\h\r\l\b\r\h\p\2\s\s\a\k\z\i\s\a\d\w\b\n\d\z\n\w\d\u\o\j\r\g\x\m\1\b\7\i\6\3\p\t\p\z\9\j\g\0\0\t\a\5\j\j\d\2\1\c\3\1\m\6\f\v\f\n\9\o\5\h\7\y\v\o\o\p\5\0\s\i\i\i\l\7\h\w\h\6\o\m\8\p\2\y\9\5\f\d\p\6\c\4\0\a\s\7\d\m\l\k\g\g\x\i\y\p\x\6\y\h\i\y\u\c\e\q\t\u\o\t\i\n\4\d\4\j\a\4\d\o\6\4\4\h\r\c\4\2\r\b\b\t\u\5\o\k\8\s\e\w\s\g\9\6\h\c\o\k\t\i\5\0\k\d\t\l\1\c\n\x\w\s\2\x\u\h\c\c\p\4\3\g\8\b\b\s\7\4\c\o\l\1\u\i\7\h\b\h\5\o\m\1\w\4\7\l\8\1\0\u\g\8\o\x\3\w\g\c\6\i\1\n\0\z\8\g\e\d\h\0\3\q\s\7\f\d\9\k\t\6\9\p\0\x\8\m\m\h\o\0\h\3\z\s\2\3\b\6\4\e\e\z\4\8\c\i\8\q\c\9\i\6\t\y\j\d\0\6\d\3\b\l\2\u\u\r\h\r\2\t\u\l\9\z\k\c\v\e\g\f\d\2\p\1\z\6\p\w\w\k\z\1\r\x\8\t\n\m\a\v\2\3\9\3\w\o\9\e\w\4\q\o\u\t\b\h\m\x\o\l\r\j\c\d\7\i\3\p\h\w\d\k\r\6\g\g\h\j\j\r\3\i\w\j\8\v\e\d\a\w\m\4\c\w\h\k\6\y\k\4\b\9\n\o\e\w\w\q\f\y\i\8\3\7\b\6\9\7\o\0\7\e\3\d\s\8\2\d\j\e\p\2\l\x\j\y\h\k\c\1\v\2\1\v\4\c\d\u\9\i\o\0\m\2\o\s\w\1\e\a\8\i\y\h\s\4\v\j\3\x\3\p\3\r\k\r\z\a\1\t\n\v\7\6\b\r\7\3\l\0\i\g\x\c\6\q\s\v\0\w\d\6\9\i\c\k\j\8\z\g\v\i\7\e\q\8\7\i\a\d\u\9\c\u\c\5\u\9\k\4\m\q\p\2\1\c\v\7\4\5\0\d\p\s\w\s\g\i\h\c\a\l\3\q\q\8\8\3\f\t\a\y\e\n\c\a\m\c\7\p\n\e\x\h\e\r\m\q\8\n\k\1\a\y\b\8\3\s\u\g\1\h\q\w\c\0\w\l\y\0\k\a\k\f\d\z\p\j\u\h\9\w\m\w\d\8\k\5\3\l\m\o\f\w\9\p\w\n\v\1\e\l\n\y\0\8\6\1\i\z\7\v\y\8\i\j\8\j\c\1\5\e\l\m\y\e\o\w\s\m\b\v\o\4\u\0\f\q\t\h\m\b\x\n\d\r\v\i\1\6\5\2\3\8\m\p\n\d\3\0\3\3\e\p\8\e\0\a\r\t\v\3\o\z\g\i\o\h\3\3\q\b\c\u\9\5\3\t\0\w\a\a\m\s\q\j\h\n\6\o\u\q\c\a\c\s\b\q\d\f\g\p\1\y\e\l\s\c\p\5\7\a\m\3\0\z\j\u\x\9\s\y\p\w\k\z\6\z\m\a\w\b\8\q\0\5\0\a\p\d\l\b\9\7\b\y\j\p\n\4\a\t\p\i\q\u\z\s\6\q\6\f\8\8\d\o\j\6\1\1\4\k\g\n\k\t\j\o\3\r\v\g\w\y\8\e\x\c\c\4\i\c\9\h\e\3\5\r\p\u\8\m\w\h\l\7\j\6\v\g\s\4\7\q\a\1\5\u\k\0\5\w\r\m\a\4\x\a\s\0\e\5\0\h\l\a\v\j\n\g\m\c\g\e\k\h\w\5\x\9\5\c\t\6\w\o\s\o\c\1\l\c\x\8\j\1\w\k\e\e\1\i\9\e\k\u\5\5\7\j\4\h\c\h\t\i\6\2\j\9\w\u\1\6\x\w\4\6\g\k\l\p\g\y\i\3\f\i\9\b\w\w\b\p\6\h\9\5\k\o\t\5\f\n\q\v\l\k\k\4\0\m\7\8\q\v\1\9\5\v\f\5\z\9\m\x\m\o\i\b\7\u\9\3\d\x\j\s\5\i\7\1\q\r\2\3\u\7\p\p\5\4\i\d\2\v\h\d\e\7\k\4\v\g\e\p\z\6\d\o\y\g\5\0\p\n\h\t\1\g\w\q\u\z\o\j\3\t\h\7\y\n\u\m\s\n\d\h\l\e\2\j\g\1\x\1\l\6\c\q\d\o\q\2\i\x\e\6\6\8\6\v\2\g\5\v\b\i\c\q\i\b\g\x\j\f\n\v\5\k\2\w\0\i\x\a\3\h\j\u\h\n\s\y\p\t\c\x\i\2\p\e\y\s\a\w\q\l\e\r\b\a\g\7\e\k\6\v\7\5\t\n\k\d\b\b\n\4\5\m\3\7\y\e\7\q\1\6\w\o\d\d\l\l\9\s\7\i\k\o\a\f\r\k\n\g\1\v\p\m\z\w\2\t\v\t\c\a\p\y\w\s\4\o\k
\x\o\y\9\x\7\w\o\f\s\n\0\w\y\n\k\n\s\e\p\b\v\q\9\e\0\4\k\n\6\9\k\r\q\4\m\5\e\g\0\x\y\8\6\e\i\2\0\0\o\y\o\f\5\i\f\b\w\3\9\b\3\o\9\3\z\z\d\6\z\z\y\a\j\k\g\8\c\4\b\e\6\j\g\h\c\e\l\e\1\t\8\d\k\f\q\9\b\r\d\2\2\z\q\z\m\x\r\f\g\w\v\0\o\u\v\o\0\e\v\0\4\5\o\i\r\r\p\9\7\k\1\j\k\d\4\6\y\v\f\a\w\t\x\o\l\s\m\l\c\w\n\9\0\m\m\4\r\u\j\n\2\z\z\6\h\f\3\j\i\p\o\c\a\p\2\r\t\j\w\4\p\w\l\f\c\t\a\n\k\5\n\6\e\q\0\5\z\j\8\3\8\d\1\7\9\w\1\o\k\2\z\i\1\f\d\h\v\i\j\o\8\5\d\q\e\z\4\k\9\3\t\f\g\h\9\7\4\e\p\o\t\m\0\k\u\r\f\x\b\e\w\2\a\8\f\e\g\5\s\u\e\2\q\d\s\y\q\3\y\g\k\l\5\t\x\7\f\2\z\j\o\t\c\s\f\t\c\6\9\q\8\k\s\t\j\w\p\y\w\d\y\7\y\b\v\i\t\a\p\t\s\j\d\m\n\3\l\3\k\a\w\m\b\s\h\j\u\h\y\z\k\w\x\0\e\f\s\a\3\d\v\m\9\7\u\o\w\g\i\d\o\9\s\p\s\8\n\f\0\p\n\2\v\p\2\f\2\0\3\n\f\i\9\0\z\j\3\7\p\4\m\a\c\p\r\b\z\y\9\g\k\w\k\a\q\v\z\u\3\r\q\u\x\x\f\5\0\a\5\8\p\p\q\o\l\y\p\0\3\o\q\8\b\d\8\w\8\d\o\7\t\v\z\7\a\n\4\t\x\w\d\w\z\v\i\l\6\j\d\7\s\v\v\i\t\a\7\y\u\k\l\r\o\j\q\u\u\9\2\o\l\3\1\7\g\n\q\8\x\6\6\p\t\u\x\i\4\8\d\f\g\a\t\6\c\n\t\c\w\h\l\7\9\a\5\k\2\p\0\e\f\e\q\0\1\f\p\b\t\p\x\8\r\a\3\p\r\3\y\j\f\s\0\3\m\u\b\8\p\g ]] 00:07:23.606 00:07:23.606 real 0m0.955s 00:07:23.606 user 0m0.643s 00:07:23.606 sys 0m0.389s 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:23.606 10:56:28 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:23.606 [2024-10-29 10:56:28.962912] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
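The backslash-heavy block above is not corruption: it is bash xtrace output for the right-hand side of the [[ ... == ... ]] comparison, where the expected buffer is quoted and therefore rendered with every character escaped so it matches literally. Stripped of the tracing, the verification step amounts to the two lines below (continuing the data and DUMP1 variables from the previous sketch):

  # Read the first 4096 characters back out of the copied file and require an exact match.
  read -rn4096 data_check < "$DUMP1"
  [[ $data_check == "$data" ]]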
00:07:23.606 [2024-10-29 10:56:28.962996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73116 ] 00:07:23.606 { 00:07:23.606 "subsystems": [ 00:07:23.606 { 00:07:23.606 "subsystem": "bdev", 00:07:23.606 "config": [ 00:07:23.606 { 00:07:23.606 "params": { 00:07:23.606 "trtype": "pcie", 00:07:23.606 "traddr": "0000:00:10.0", 00:07:23.606 "name": "Nvme0" 00:07:23.606 }, 00:07:23.606 "method": "bdev_nvme_attach_controller" 00:07:23.606 }, 00:07:23.606 { 00:07:23.606 "method": "bdev_wait_for_examine" 00:07:23.606 } 00:07:23.606 ] 00:07:23.606 } 00:07:23.606 ] 00:07:23.606 } 00:07:23.606 [2024-10-29 10:56:29.100568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.866 [2024-10-29 10:56:29.119825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.866 [2024-10-29 10:56:29.147724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.866  [2024-10-29T10:56:29.363Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:23.866 00:07:23.866 10:56:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.126 ************************************ 00:07:24.126 END TEST spdk_dd_basic_rw 00:07:24.126 ************************************ 00:07:24.126 00:07:24.126 real 0m13.806s 00:07:24.126 user 0m9.861s 00:07:24.126 sys 0m4.460s 00:07:24.126 10:56:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.126 10:56:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:24.126 10:56:29 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:24.126 10:56:29 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:24.126 10:56:29 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.126 10:56:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:24.126 ************************************ 00:07:24.126 START TEST spdk_dd_posix 00:07:24.126 ************************************ 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:24.126 * Looking for test storage... 
00:07:24.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.126 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:24.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.127 --rc genhtml_branch_coverage=1 00:07:24.127 --rc genhtml_function_coverage=1 00:07:24.127 --rc genhtml_legend=1 00:07:24.127 --rc geninfo_all_blocks=1 00:07:24.127 --rc geninfo_unexecuted_blocks=1 00:07:24.127 00:07:24.127 ' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:24.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.127 --rc genhtml_branch_coverage=1 00:07:24.127 --rc genhtml_function_coverage=1 00:07:24.127 --rc genhtml_legend=1 00:07:24.127 --rc geninfo_all_blocks=1 00:07:24.127 --rc geninfo_unexecuted_blocks=1 00:07:24.127 00:07:24.127 ' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:24.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.127 --rc genhtml_branch_coverage=1 00:07:24.127 --rc genhtml_function_coverage=1 00:07:24.127 --rc genhtml_legend=1 00:07:24.127 --rc geninfo_all_blocks=1 00:07:24.127 --rc geninfo_unexecuted_blocks=1 00:07:24.127 00:07:24.127 ' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:24.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.127 --rc genhtml_branch_coverage=1 00:07:24.127 --rc genhtml_function_coverage=1 00:07:24.127 --rc genhtml_legend=1 00:07:24.127 --rc geninfo_all_blocks=1 00:07:24.127 --rc geninfo_unexecuted_blocks=1 00:07:24.127 00:07:24.127 ' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:24.127 * First test run, liburing in use 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.127 ************************************ 00:07:24.127 START TEST dd_flag_append 00:07:24.127 ************************************ 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=c96goczksx820vq0sr6eih6z049b3hen 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=yuenozpd5bx41ypikbz5bxpyo67lcuj7 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s c96goczksx820vq0sr6eih6z049b3hen 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s yuenozpd5bx41ypikbz5bxpyo67lcuj7 00:07:24.127 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:24.388 [2024-10-29 10:56:29.669000] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
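dd_flag_append seeds dd.dump0 and dd.dump1 with two 32-character random tokens, copies dump0 onto dump1 with --oflag=append, and then checks that dump1 holds its own token followed by dump0's. A self-contained sketch of that flow; the tokens here are placeholders generated on the fly, not the c96gocz.../yuenozp... values in the log:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

  dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)   # stand-in for gen_bytes 32
  dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
  printf '%s' "$dump0" > "$DUMP0"
  printf '%s' "$dump1" > "$DUMP1"

  # --oflag=append opens the output file in append mode instead of truncating it.
  "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --oflag=append
  [[ $(< "$DUMP1") == "${dump1}${dump0}" ]]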
00:07:24.388 [2024-10-29 10:56:29.669085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73177 ] 00:07:24.388 [2024-10-29 10:56:29.818984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.388 [2024-10-29 10:56:29.838717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.388 [2024-10-29 10:56:29.866426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.388  [2024-10-29T10:56:30.144Z] Copying: 32/32 [B] (average 31 kBps) 00:07:24.647 00:07:24.647 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ yuenozpd5bx41ypikbz5bxpyo67lcuj7c96goczksx820vq0sr6eih6z049b3hen == \y\u\e\n\o\z\p\d\5\b\x\4\1\y\p\i\k\b\z\5\b\x\p\y\o\6\7\l\c\u\j\7\c\9\6\g\o\c\z\k\s\x\8\2\0\v\q\0\s\r\6\e\i\h\6\z\0\4\9\b\3\h\e\n ]] 00:07:24.647 00:07:24.647 real 0m0.400s 00:07:24.647 user 0m0.184s 00:07:24.647 sys 0m0.172s 00:07:24.647 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.647 10:56:29 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:24.647 ************************************ 00:07:24.647 END TEST dd_flag_append 00:07:24.647 ************************************ 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:24.647 ************************************ 00:07:24.647 START TEST dd_flag_directory 00:07:24.647 ************************************ 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.647 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.647 [2024-10-29 10:56:30.107415] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:24.647 [2024-10-29 10:56:30.107499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73211 ] 00:07:24.905 [2024-10-29 10:56:30.260125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.906 [2024-10-29 10:56:30.284672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.906 [2024-10-29 10:56:30.318654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.906 [2024-10-29 10:56:30.337201] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.906 [2024-10-29 10:56:30.337546] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:24.906 [2024-10-29 10:56:30.337580] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.165 [2024-10-29 10:56:30.407830] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.165 10:56:30 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.165 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:25.165 [2024-10-29 10:56:30.514412] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:25.165 [2024-10-29 10:56:30.514502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73215 ] 00:07:25.165 [2024-10-29 10:56:30.660088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.424 [2024-10-29 10:56:30.680474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.424 [2024-10-29 10:56:30.711996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.424 [2024-10-29 10:56:30.728602] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:25.424 [2024-10-29 10:56:30.728661] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:25.424 [2024-10-29 10:56:30.728695] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.424 [2024-10-29 10:56:30.791394] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.424 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:25.424 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.424 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.425 00:07:25.425 real 0m0.791s 00:07:25.425 user 0m0.388s 00:07:25.425 sys 0m0.193s 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:25.425 ************************************ 00:07:25.425 END TEST dd_flag_directory 00:07:25.425 ************************************ 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:25.425 10:56:30 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:25.425 ************************************ 00:07:25.425 START TEST dd_flag_nofollow 00:07:25.425 ************************************ 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.425 10:56:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.689 [2024-10-29 10:56:30.954996] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
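dd_flag_directory, which finished just above, is a negative test: spdk_dd is expected to fail with "Not a directory" when --iflag=directory or --oflag=directory is applied to a regular dump file, and the NOT helper converts that expected non-zero exit into a pass. Reduced to its essence, and reusing the SPDK_DD and DUMP0 variables from the earlier sketches (the real helper also inspects the exit code, which the plain ! below does not):

  # Both invocations must fail, because dd.dump0 is a regular file, not a directory.
  ! "$SPDK_DD" --if="$DUMP0" --iflag=directory --of="$DUMP0"
  ! "$SPDK_DD" --if="$DUMP0" --of="$DUMP0" --oflag=directory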
00:07:25.689 [2024-10-29 10:56:30.955096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73249 ] 00:07:25.689 [2024-10-29 10:56:31.099322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.689 [2024-10-29 10:56:31.117837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.689 [2024-10-29 10:56:31.145601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.689 [2024-10-29 10:56:31.160691] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.689 [2024-10-29 10:56:31.160745] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:25.689 [2024-10-29 10:56:31.160779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.957 [2024-10-29 10:56:31.222829] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.957 10:56:31 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.957 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:25.957 [2024-10-29 10:56:31.325468] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:25.957 [2024-10-29 10:56:31.325557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73253 ] 00:07:26.215 [2024-10-29 10:56:31.469675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.216 [2024-10-29 10:56:31.488362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.216 [2024-10-29 10:56:31.515831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.216 [2024-10-29 10:56:31.531045] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:26.216 [2024-10-29 10:56:31.531420] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:26.216 [2024-10-29 10:56:31.531466] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.216 [2024-10-29 10:56:31.590787] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:26.216 10:56:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.216 [2024-10-29 10:56:31.706880] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:26.216 [2024-10-29 10:56:31.706971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73255 ] 00:07:26.475 [2024-10-29 10:56:31.853964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.475 [2024-10-29 10:56:31.875197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.475 [2024-10-29 10:56:31.904006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.475  [2024-10-29T10:56:32.231Z] Copying: 512/512 [B] (average 500 kBps) 00:07:26.734 00:07:26.734 ************************************ 00:07:26.734 END TEST dd_flag_nofollow 00:07:26.734 ************************************ 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ pderkzwq5xsf9i1rtfa2au6yhyo0me5dhtipyrc6liiionwq16aj85whcpsigj4end7m5uc6kjxc99pcnslkuprksok4tw25ya3jzgas1p1to23aolins5chdb8z3q3d2iqq3t64qfy41maccw99djo85p0rcaperxii2wvsvjwu9xn1ybfjvcj660194a0lvb3lcjvxoakec24v5u0cfzigv7509jm2o772dkyg6v36n7kv42zlnh0tkd9glnal2btw0iaymlyx41tokboj1wdwjolsnlv7fx8n2epsoq0eb3zh1mqa0qoworww749obaf82bzls1invf9nwco0rgmg24zxgwpe5m8vaze4a87kdcex5951695xcrgxw29s3ya9p4yjapnzce4a3977a30d2ysszau8ahdkin7c2eu3cfquc0anoqnrm5q8tw6flwsdhpel62akaycjzndqrwtv53pjfqecin2bbyjl63owsu94b5ot86mgfhu2lsl2 == \p\d\e\r\k\z\w\q\5\x\s\f\9\i\1\r\t\f\a\2\a\u\6\y\h\y\o\0\m\e\5\d\h\t\i\p\y\r\c\6\l\i\i\i\o\n\w\q\1\6\a\j\8\5\w\h\c\p\s\i\g\j\4\e\n\d\7\m\5\u\c\6\k\j\x\c\9\9\p\c\n\s\l\k\u\p\r\k\s\o\k\4\t\w\2\5\y\a\3\j\z\g\a\s\1\p\1\t\o\2\3\a\o\l\i\n\s\5\c\h\d\b\8\z\3\q\3\d\2\i\q\q\3\t\6\4\q\f\y\4\1\m\a\c\c\w\9\9\d\j\o\8\5\p\0\r\c\a\p\e\r\x\i\i\2\w\v\s\v\j\w\u\9\x\n\1\y\b\f\j\v\c\j\6\6\0\1\9\4\a\0\l\v\b\3\l\c\j\v\x\o\a\k\e\c\2\4\v\5\u\0\c\f\z\i\g\v\7\5\0\9\j\m\2\o\7\7\2\d\k\y\g\6\v\3\6\n\7\k\v\4\2\z\l\n\h\0\t\k\d\9\g\l\n\a\l\2\b\t\w\0\i\a\y\m\l\y\x\4\1\t\o\k\b\o\j\1\w\d\w\j\o\l\s\n\l\v\7\f\x\8\n\2\e\p\s\o\q\0\e\b\3\z\h\1\m\q\a\0\q\o\w\o\r\w\w\7\4\9\o\b\a\f\8\2\b\z\l\s\1\i\n\v\f\9\n\w\c\o\0\r\g\m\g\2\4\z\x\g\w\p\e\5\m\8\v\a\z\e\4\a\8\7\k\d\c\e\x\5\9\5\1\6\9\5\x\c\r\g\x\w\2\9\s\3\y\a\9\p\4\y\j\a\p\n\z\c\e\4\a\3\9\7\7\a\3\0\d\2\y\s\s\z\a\u\8\a\h\d\k\i\n\7\c\2\e\u\3\c\f\q\u\c\0\a\n\o\q\n\r\m\5\q\8\t\w\6\f\l\w\s\d\h\p\e\l\6\2\a\k\a\y\c\j\z\n\d\q\r\w\t\v\5\3\p\j\f\q\e\c\i\n\2\b\b\y\j\l\6\3\o\w\s\u\9\4\b\5\o\t\8\6\m\g\f\h\u\2\l\s\l\2 ]] 00:07:26.734 00:07:26.734 real 0m1.153s 00:07:26.734 user 0m0.567s 00:07:26.734 sys 0m0.344s 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:26.734 ************************************ 00:07:26.734 START TEST dd_flag_noatime 00:07:26.734 ************************************ 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1730199391 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1730199392 00:07:26.734 10:56:32 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:27.671 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.929 [2024-10-29 10:56:33.171771] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:27.929 [2024-10-29 10:56:33.171904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73303 ] 00:07:27.929 [2024-10-29 10:56:33.326653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.929 [2024-10-29 10:56:33.350716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.929 [2024-10-29 10:56:33.385818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.929  [2024-10-29T10:56:33.686Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.189 00:07:28.189 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.189 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1730199391 )) 00:07:28.189 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.189 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1730199392 )) 00:07:28.189 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.189 [2024-10-29 10:56:33.597298] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
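The dd_flag_noatime entries above record the source file's access time with stat --printf=%X, sleep for one second, copy with --iflag=noatime, and compare the timestamp again. A hedged stand-alone equivalent follows (paths assumed; note the outcome also depends on the filesystem's atime mount options, which this sketch does not control for):

  # Sketch: a copy with --iflag=noatime should leave the source atime unchanged.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd    # path as seen in the log above
  src=/tmp/dd.src; dst=/tmp/dd.dst                          # assumed scratch paths
  head -c 512 /dev/urandom > "$src"
  atime_before=$(stat --printf=%X "$src")
  sleep 1                                                   # mirrors dd/posix.sh@66, so a changed atime would be visible
  "$SPDK_DD" --if="$src" --iflag=noatime --of="$dst"
  atime_after=$(stat --printf=%X "$src")
  (( atime_before == atime_after )) && echo "atime preserved" || echo "atime changed" >&2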
00:07:28.189 [2024-10-29 10:56:33.597410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73311 ] 00:07:28.448 [2024-10-29 10:56:33.745215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.448 [2024-10-29 10:56:33.766860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.448 [2024-10-29 10:56:33.797203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.448  [2024-10-29T10:56:33.945Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.448 00:07:28.449 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.449 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1730199393 )) 00:07:28.449 00:07:28.449 real 0m1.840s 00:07:28.449 user 0m0.417s 00:07:28.449 sys 0m0.381s 00:07:28.449 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.449 10:56:33 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:28.449 ************************************ 00:07:28.449 END TEST dd_flag_noatime 00:07:28.449 ************************************ 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:28.708 ************************************ 00:07:28.708 START TEST dd_flags_misc 00:07:28.708 ************************************ 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.708 10:56:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:28.708 [2024-10-29 10:56:34.033916] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
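The dd_flags_misc run above declares flags_ro=(direct nonblock) and flags_rw=(direct nonblock sync dsync) and copies dd.dump0 to dd.dump1 for every iflag/oflag pairing; the per-combination results follow. A compact sketch of the same sweep, with the binary path taken from the log and the scratch paths assumed:

  # Sketch: exercise every read-flag/write-flag combination the harness iterates.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd    # path as seen in the log above
  src=/tmp/dd.src; dst=/tmp/dd.dst                          # assumed scratch paths
  head -c 512 /dev/urandom > "$src"
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)                    # same array construction as dd/posix.sh@81-82
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      "$SPDK_DD" --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw" || exit 1
    done
  done

One caveat: O_DIRECT can be rejected on some filesystems (tmpfs, for example), so where the scratch files live matters for the direct combinations.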
00:07:28.708 [2024-10-29 10:56:34.033997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73345 ] 00:07:28.708 [2024-10-29 10:56:34.173015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.708 [2024-10-29 10:56:34.192247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.967 [2024-10-29 10:56:34.221433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.967  [2024-10-29T10:56:34.464Z] Copying: 512/512 [B] (average 500 kBps) 00:07:28.967 00:07:28.967 10:56:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 368nihwg54g9tzhn3nav5gudz7echmovk60gds0lo4lgxj5ziunh9ar71ktdlfaku7vxjgahh8sq6ntneyhw3jqxjcndxvdo54qng5ep4amovt7vm3smpkli37zaycr7418s2cbockyiw39z7549l1y1zix448ze1f57sjl02440wt44gn9xgj4flmj98sth0pm4r0x22aliuv6t8eip53vrd7urw7qtq7s8i8ipueg8drctzv2wp3v06nvlf2fuczj01ekuiw4tjf1pyrr4tki0begu4bukhrv06ntrlskcjdt393e32zgfm40wr17rrh4ogtvhsv14kp1bcfyoex8ohe6t2h81oaywbmbucuuv3hdb5ol8ljzvmz8uygnegildqp9bu6yc48gepij13nmqtmk0ov8prau190rq2p8lahliffhb1ihknspyecnxyzpdim7rjn1xmttk1wh9ykd2yiqqpe40q34eofbvk2stvn6l0ovypmlfp4kpuuci == \3\6\8\n\i\h\w\g\5\4\g\9\t\z\h\n\3\n\a\v\5\g\u\d\z\7\e\c\h\m\o\v\k\6\0\g\d\s\0\l\o\4\l\g\x\j\5\z\i\u\n\h\9\a\r\7\1\k\t\d\l\f\a\k\u\7\v\x\j\g\a\h\h\8\s\q\6\n\t\n\e\y\h\w\3\j\q\x\j\c\n\d\x\v\d\o\5\4\q\n\g\5\e\p\4\a\m\o\v\t\7\v\m\3\s\m\p\k\l\i\3\7\z\a\y\c\r\7\4\1\8\s\2\c\b\o\c\k\y\i\w\3\9\z\7\5\4\9\l\1\y\1\z\i\x\4\4\8\z\e\1\f\5\7\s\j\l\0\2\4\4\0\w\t\4\4\g\n\9\x\g\j\4\f\l\m\j\9\8\s\t\h\0\p\m\4\r\0\x\2\2\a\l\i\u\v\6\t\8\e\i\p\5\3\v\r\d\7\u\r\w\7\q\t\q\7\s\8\i\8\i\p\u\e\g\8\d\r\c\t\z\v\2\w\p\3\v\0\6\n\v\l\f\2\f\u\c\z\j\0\1\e\k\u\i\w\4\t\j\f\1\p\y\r\r\4\t\k\i\0\b\e\g\u\4\b\u\k\h\r\v\0\6\n\t\r\l\s\k\c\j\d\t\3\9\3\e\3\2\z\g\f\m\4\0\w\r\1\7\r\r\h\4\o\g\t\v\h\s\v\1\4\k\p\1\b\c\f\y\o\e\x\8\o\h\e\6\t\2\h\8\1\o\a\y\w\b\m\b\u\c\u\u\v\3\h\d\b\5\o\l\8\l\j\z\v\m\z\8\u\y\g\n\e\g\i\l\d\q\p\9\b\u\6\y\c\4\8\g\e\p\i\j\1\3\n\m\q\t\m\k\0\o\v\8\p\r\a\u\1\9\0\r\q\2\p\8\l\a\h\l\i\f\f\h\b\1\i\h\k\n\s\p\y\e\c\n\x\y\z\p\d\i\m\7\r\j\n\1\x\m\t\t\k\1\w\h\9\y\k\d\2\y\i\q\q\p\e\4\0\q\3\4\e\o\f\b\v\k\2\s\t\v\n\6\l\0\o\v\y\p\m\l\f\p\4\k\p\u\u\c\i ]] 00:07:28.967 10:56:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:28.967 10:56:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:28.967 [2024-10-29 10:56:34.400139] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:28.967 [2024-10-29 10:56:34.400263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73349 ] 00:07:29.227 [2024-10-29 10:56:34.545078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.227 [2024-10-29 10:56:34.563717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.227 [2024-10-29 10:56:34.591685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.227  [2024-10-29T10:56:34.724Z] Copying: 512/512 [B] (average 500 kBps) 00:07:29.227 00:07:29.227 10:56:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 368nihwg54g9tzhn3nav5gudz7echmovk60gds0lo4lgxj5ziunh9ar71ktdlfaku7vxjgahh8sq6ntneyhw3jqxjcndxvdo54qng5ep4amovt7vm3smpkli37zaycr7418s2cbockyiw39z7549l1y1zix448ze1f57sjl02440wt44gn9xgj4flmj98sth0pm4r0x22aliuv6t8eip53vrd7urw7qtq7s8i8ipueg8drctzv2wp3v06nvlf2fuczj01ekuiw4tjf1pyrr4tki0begu4bukhrv06ntrlskcjdt393e32zgfm40wr17rrh4ogtvhsv14kp1bcfyoex8ohe6t2h81oaywbmbucuuv3hdb5ol8ljzvmz8uygnegildqp9bu6yc48gepij13nmqtmk0ov8prau190rq2p8lahliffhb1ihknspyecnxyzpdim7rjn1xmttk1wh9ykd2yiqqpe40q34eofbvk2stvn6l0ovypmlfp4kpuuci == \3\6\8\n\i\h\w\g\5\4\g\9\t\z\h\n\3\n\a\v\5\g\u\d\z\7\e\c\h\m\o\v\k\6\0\g\d\s\0\l\o\4\l\g\x\j\5\z\i\u\n\h\9\a\r\7\1\k\t\d\l\f\a\k\u\7\v\x\j\g\a\h\h\8\s\q\6\n\t\n\e\y\h\w\3\j\q\x\j\c\n\d\x\v\d\o\5\4\q\n\g\5\e\p\4\a\m\o\v\t\7\v\m\3\s\m\p\k\l\i\3\7\z\a\y\c\r\7\4\1\8\s\2\c\b\o\c\k\y\i\w\3\9\z\7\5\4\9\l\1\y\1\z\i\x\4\4\8\z\e\1\f\5\7\s\j\l\0\2\4\4\0\w\t\4\4\g\n\9\x\g\j\4\f\l\m\j\9\8\s\t\h\0\p\m\4\r\0\x\2\2\a\l\i\u\v\6\t\8\e\i\p\5\3\v\r\d\7\u\r\w\7\q\t\q\7\s\8\i\8\i\p\u\e\g\8\d\r\c\t\z\v\2\w\p\3\v\0\6\n\v\l\f\2\f\u\c\z\j\0\1\e\k\u\i\w\4\t\j\f\1\p\y\r\r\4\t\k\i\0\b\e\g\u\4\b\u\k\h\r\v\0\6\n\t\r\l\s\k\c\j\d\t\3\9\3\e\3\2\z\g\f\m\4\0\w\r\1\7\r\r\h\4\o\g\t\v\h\s\v\1\4\k\p\1\b\c\f\y\o\e\x\8\o\h\e\6\t\2\h\8\1\o\a\y\w\b\m\b\u\c\u\u\v\3\h\d\b\5\o\l\8\l\j\z\v\m\z\8\u\y\g\n\e\g\i\l\d\q\p\9\b\u\6\y\c\4\8\g\e\p\i\j\1\3\n\m\q\t\m\k\0\o\v\8\p\r\a\u\1\9\0\r\q\2\p\8\l\a\h\l\i\f\f\h\b\1\i\h\k\n\s\p\y\e\c\n\x\y\z\p\d\i\m\7\r\j\n\1\x\m\t\t\k\1\w\h\9\y\k\d\2\y\i\q\q\p\e\4\0\q\3\4\e\o\f\b\v\k\2\s\t\v\n\6\l\0\o\v\y\p\m\l\f\p\4\k\p\u\u\c\i ]] 00:07:29.227 10:56:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.227 10:56:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:29.487 [2024-10-29 10:56:34.780517] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:29.487 [2024-10-29 10:56:34.780774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73353 ] 00:07:29.487 [2024-10-29 10:56:34.928207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.487 [2024-10-29 10:56:34.949738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.487 [2024-10-29 10:56:34.978035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.746  [2024-10-29T10:56:35.243Z] Copying: 512/512 [B] (average 166 kBps) 00:07:29.746 00:07:29.747 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 368nihwg54g9tzhn3nav5gudz7echmovk60gds0lo4lgxj5ziunh9ar71ktdlfaku7vxjgahh8sq6ntneyhw3jqxjcndxvdo54qng5ep4amovt7vm3smpkli37zaycr7418s2cbockyiw39z7549l1y1zix448ze1f57sjl02440wt44gn9xgj4flmj98sth0pm4r0x22aliuv6t8eip53vrd7urw7qtq7s8i8ipueg8drctzv2wp3v06nvlf2fuczj01ekuiw4tjf1pyrr4tki0begu4bukhrv06ntrlskcjdt393e32zgfm40wr17rrh4ogtvhsv14kp1bcfyoex8ohe6t2h81oaywbmbucuuv3hdb5ol8ljzvmz8uygnegildqp9bu6yc48gepij13nmqtmk0ov8prau190rq2p8lahliffhb1ihknspyecnxyzpdim7rjn1xmttk1wh9ykd2yiqqpe40q34eofbvk2stvn6l0ovypmlfp4kpuuci == \3\6\8\n\i\h\w\g\5\4\g\9\t\z\h\n\3\n\a\v\5\g\u\d\z\7\e\c\h\m\o\v\k\6\0\g\d\s\0\l\o\4\l\g\x\j\5\z\i\u\n\h\9\a\r\7\1\k\t\d\l\f\a\k\u\7\v\x\j\g\a\h\h\8\s\q\6\n\t\n\e\y\h\w\3\j\q\x\j\c\n\d\x\v\d\o\5\4\q\n\g\5\e\p\4\a\m\o\v\t\7\v\m\3\s\m\p\k\l\i\3\7\z\a\y\c\r\7\4\1\8\s\2\c\b\o\c\k\y\i\w\3\9\z\7\5\4\9\l\1\y\1\z\i\x\4\4\8\z\e\1\f\5\7\s\j\l\0\2\4\4\0\w\t\4\4\g\n\9\x\g\j\4\f\l\m\j\9\8\s\t\h\0\p\m\4\r\0\x\2\2\a\l\i\u\v\6\t\8\e\i\p\5\3\v\r\d\7\u\r\w\7\q\t\q\7\s\8\i\8\i\p\u\e\g\8\d\r\c\t\z\v\2\w\p\3\v\0\6\n\v\l\f\2\f\u\c\z\j\0\1\e\k\u\i\w\4\t\j\f\1\p\y\r\r\4\t\k\i\0\b\e\g\u\4\b\u\k\h\r\v\0\6\n\t\r\l\s\k\c\j\d\t\3\9\3\e\3\2\z\g\f\m\4\0\w\r\1\7\r\r\h\4\o\g\t\v\h\s\v\1\4\k\p\1\b\c\f\y\o\e\x\8\o\h\e\6\t\2\h\8\1\o\a\y\w\b\m\b\u\c\u\u\v\3\h\d\b\5\o\l\8\l\j\z\v\m\z\8\u\y\g\n\e\g\i\l\d\q\p\9\b\u\6\y\c\4\8\g\e\p\i\j\1\3\n\m\q\t\m\k\0\o\v\8\p\r\a\u\1\9\0\r\q\2\p\8\l\a\h\l\i\f\f\h\b\1\i\h\k\n\s\p\y\e\c\n\x\y\z\p\d\i\m\7\r\j\n\1\x\m\t\t\k\1\w\h\9\y\k\d\2\y\i\q\q\p\e\4\0\q\3\4\e\o\f\b\v\k\2\s\t\v\n\6\l\0\o\v\y\p\m\l\f\p\4\k\p\u\u\c\i ]] 00:07:29.747 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:29.747 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:29.747 [2024-10-29 10:56:35.173016] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:29.747 [2024-10-29 10:56:35.173109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73368 ] 00:07:30.006 [2024-10-29 10:56:35.317981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.006 [2024-10-29 10:56:35.336724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.006 [2024-10-29 10:56:35.364608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.006  [2024-10-29T10:56:35.503Z] Copying: 512/512 [B] (average 250 kBps) 00:07:30.006 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 368nihwg54g9tzhn3nav5gudz7echmovk60gds0lo4lgxj5ziunh9ar71ktdlfaku7vxjgahh8sq6ntneyhw3jqxjcndxvdo54qng5ep4amovt7vm3smpkli37zaycr7418s2cbockyiw39z7549l1y1zix448ze1f57sjl02440wt44gn9xgj4flmj98sth0pm4r0x22aliuv6t8eip53vrd7urw7qtq7s8i8ipueg8drctzv2wp3v06nvlf2fuczj01ekuiw4tjf1pyrr4tki0begu4bukhrv06ntrlskcjdt393e32zgfm40wr17rrh4ogtvhsv14kp1bcfyoex8ohe6t2h81oaywbmbucuuv3hdb5ol8ljzvmz8uygnegildqp9bu6yc48gepij13nmqtmk0ov8prau190rq2p8lahliffhb1ihknspyecnxyzpdim7rjn1xmttk1wh9ykd2yiqqpe40q34eofbvk2stvn6l0ovypmlfp4kpuuci == \3\6\8\n\i\h\w\g\5\4\g\9\t\z\h\n\3\n\a\v\5\g\u\d\z\7\e\c\h\m\o\v\k\6\0\g\d\s\0\l\o\4\l\g\x\j\5\z\i\u\n\h\9\a\r\7\1\k\t\d\l\f\a\k\u\7\v\x\j\g\a\h\h\8\s\q\6\n\t\n\e\y\h\w\3\j\q\x\j\c\n\d\x\v\d\o\5\4\q\n\g\5\e\p\4\a\m\o\v\t\7\v\m\3\s\m\p\k\l\i\3\7\z\a\y\c\r\7\4\1\8\s\2\c\b\o\c\k\y\i\w\3\9\z\7\5\4\9\l\1\y\1\z\i\x\4\4\8\z\e\1\f\5\7\s\j\l\0\2\4\4\0\w\t\4\4\g\n\9\x\g\j\4\f\l\m\j\9\8\s\t\h\0\p\m\4\r\0\x\2\2\a\l\i\u\v\6\t\8\e\i\p\5\3\v\r\d\7\u\r\w\7\q\t\q\7\s\8\i\8\i\p\u\e\g\8\d\r\c\t\z\v\2\w\p\3\v\0\6\n\v\l\f\2\f\u\c\z\j\0\1\e\k\u\i\w\4\t\j\f\1\p\y\r\r\4\t\k\i\0\b\e\g\u\4\b\u\k\h\r\v\0\6\n\t\r\l\s\k\c\j\d\t\3\9\3\e\3\2\z\g\f\m\4\0\w\r\1\7\r\r\h\4\o\g\t\v\h\s\v\1\4\k\p\1\b\c\f\y\o\e\x\8\o\h\e\6\t\2\h\8\1\o\a\y\w\b\m\b\u\c\u\u\v\3\h\d\b\5\o\l\8\l\j\z\v\m\z\8\u\y\g\n\e\g\i\l\d\q\p\9\b\u\6\y\c\4\8\g\e\p\i\j\1\3\n\m\q\t\m\k\0\o\v\8\p\r\a\u\1\9\0\r\q\2\p\8\l\a\h\l\i\f\f\h\b\1\i\h\k\n\s\p\y\e\c\n\x\y\z\p\d\i\m\7\r\j\n\1\x\m\t\t\k\1\w\h\9\y\k\d\2\y\i\q\q\p\e\4\0\q\3\4\e\o\f\b\v\k\2\s\t\v\n\6\l\0\o\v\y\p\m\l\f\p\4\k\p\u\u\c\i ]] 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.006 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:30.266 [2024-10-29 10:56:35.562424] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
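Each copy above is followed by a comparison of the 512 generated bytes against what landed in dd.dump1 (the long [[ ... == ... ]] expansions in the log). A hedged equivalent that compares the files directly with cmp instead of the suite's string comparison:

  # Sketch: confirm the destination is byte-identical to the source after a flagged copy.
  src=/tmp/dd.src; dst=/tmp/dd.dst                          # assumed scratch paths from the sketches above
  if cmp -s "$src" "$dst"; then
    echo "copy verified: $dst matches $src"
  else
    echo "mismatch between $src and $dst" >&2
  fi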
00:07:30.266 [2024-10-29 10:56:35.562667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73372 ] 00:07:30.266 [2024-10-29 10:56:35.707619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.266 [2024-10-29 10:56:35.726753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.266 [2024-10-29 10:56:35.755007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.526  [2024-10-29T10:56:36.023Z] Copying: 512/512 [B] (average 500 kBps) 00:07:30.526 00:07:30.526 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s5rtoc9k5ymv5aseu51n7bq195vn13xqms3r030onv28bw3xqwh4y15f6zbnpu7kjkuuh5a55cdey4xh901yzukc5d3l3g7c0yrg1q357rcgw0s5lov7sdm4pcxb7di30pkjnj4dizpraejalql50eb11g1os9tgdj5be0xkr8idw9yipm5oede4nj6505kd53uvuzuza9793cnv5pkrp8y9tqqoyjttb47iwdnskmda7vrr8ydvs81ldz2u4ltvbupsuqerjpjhigce0jcd5pi8ys7m3cit3tttvnj4coeulc71nxrruk04w1gjwi3rclwzhbobq0bcfxjp097e0ideflfvneuedfha8bissp0g3nouux1wvxwtoqec271epnq6fbvrrq7qcf43qq55pfrb31w7rx4g76e6dcsb1uy9a8lbtc1nt5rolc64nx6sk70iaukddr5lju7hgdxq1u373qvaepjjhd6r7ygq8r3fa2oaxugcxw8xhdt12h5a == \s\5\r\t\o\c\9\k\5\y\m\v\5\a\s\e\u\5\1\n\7\b\q\1\9\5\v\n\1\3\x\q\m\s\3\r\0\3\0\o\n\v\2\8\b\w\3\x\q\w\h\4\y\1\5\f\6\z\b\n\p\u\7\k\j\k\u\u\h\5\a\5\5\c\d\e\y\4\x\h\9\0\1\y\z\u\k\c\5\d\3\l\3\g\7\c\0\y\r\g\1\q\3\5\7\r\c\g\w\0\s\5\l\o\v\7\s\d\m\4\p\c\x\b\7\d\i\3\0\p\k\j\n\j\4\d\i\z\p\r\a\e\j\a\l\q\l\5\0\e\b\1\1\g\1\o\s\9\t\g\d\j\5\b\e\0\x\k\r\8\i\d\w\9\y\i\p\m\5\o\e\d\e\4\n\j\6\5\0\5\k\d\5\3\u\v\u\z\u\z\a\9\7\9\3\c\n\v\5\p\k\r\p\8\y\9\t\q\q\o\y\j\t\t\b\4\7\i\w\d\n\s\k\m\d\a\7\v\r\r\8\y\d\v\s\8\1\l\d\z\2\u\4\l\t\v\b\u\p\s\u\q\e\r\j\p\j\h\i\g\c\e\0\j\c\d\5\p\i\8\y\s\7\m\3\c\i\t\3\t\t\t\v\n\j\4\c\o\e\u\l\c\7\1\n\x\r\r\u\k\0\4\w\1\g\j\w\i\3\r\c\l\w\z\h\b\o\b\q\0\b\c\f\x\j\p\0\9\7\e\0\i\d\e\f\l\f\v\n\e\u\e\d\f\h\a\8\b\i\s\s\p\0\g\3\n\o\u\u\x\1\w\v\x\w\t\o\q\e\c\2\7\1\e\p\n\q\6\f\b\v\r\r\q\7\q\c\f\4\3\q\q\5\5\p\f\r\b\3\1\w\7\r\x\4\g\7\6\e\6\d\c\s\b\1\u\y\9\a\8\l\b\t\c\1\n\t\5\r\o\l\c\6\4\n\x\6\s\k\7\0\i\a\u\k\d\d\r\5\l\j\u\7\h\g\d\x\q\1\u\3\7\3\q\v\a\e\p\j\j\h\d\6\r\7\y\g\q\8\r\3\f\a\2\o\a\x\u\g\c\x\w\8\x\h\d\t\1\2\h\5\a ]] 00:07:30.526 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.526 10:56:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:30.526 [2024-10-29 10:56:35.950060] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:30.526 [2024-10-29 10:56:35.950148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73376 ] 00:07:30.786 [2024-10-29 10:56:36.096503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.786 [2024-10-29 10:56:36.115059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.786 [2024-10-29 10:56:36.142897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.786  [2024-10-29T10:56:36.283Z] Copying: 512/512 [B] (average 500 kBps) 00:07:30.786 00:07:30.786 10:56:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s5rtoc9k5ymv5aseu51n7bq195vn13xqms3r030onv28bw3xqwh4y15f6zbnpu7kjkuuh5a55cdey4xh901yzukc5d3l3g7c0yrg1q357rcgw0s5lov7sdm4pcxb7di30pkjnj4dizpraejalql50eb11g1os9tgdj5be0xkr8idw9yipm5oede4nj6505kd53uvuzuza9793cnv5pkrp8y9tqqoyjttb47iwdnskmda7vrr8ydvs81ldz2u4ltvbupsuqerjpjhigce0jcd5pi8ys7m3cit3tttvnj4coeulc71nxrruk04w1gjwi3rclwzhbobq0bcfxjp097e0ideflfvneuedfha8bissp0g3nouux1wvxwtoqec271epnq6fbvrrq7qcf43qq55pfrb31w7rx4g76e6dcsb1uy9a8lbtc1nt5rolc64nx6sk70iaukddr5lju7hgdxq1u373qvaepjjhd6r7ygq8r3fa2oaxugcxw8xhdt12h5a == \s\5\r\t\o\c\9\k\5\y\m\v\5\a\s\e\u\5\1\n\7\b\q\1\9\5\v\n\1\3\x\q\m\s\3\r\0\3\0\o\n\v\2\8\b\w\3\x\q\w\h\4\y\1\5\f\6\z\b\n\p\u\7\k\j\k\u\u\h\5\a\5\5\c\d\e\y\4\x\h\9\0\1\y\z\u\k\c\5\d\3\l\3\g\7\c\0\y\r\g\1\q\3\5\7\r\c\g\w\0\s\5\l\o\v\7\s\d\m\4\p\c\x\b\7\d\i\3\0\p\k\j\n\j\4\d\i\z\p\r\a\e\j\a\l\q\l\5\0\e\b\1\1\g\1\o\s\9\t\g\d\j\5\b\e\0\x\k\r\8\i\d\w\9\y\i\p\m\5\o\e\d\e\4\n\j\6\5\0\5\k\d\5\3\u\v\u\z\u\z\a\9\7\9\3\c\n\v\5\p\k\r\p\8\y\9\t\q\q\o\y\j\t\t\b\4\7\i\w\d\n\s\k\m\d\a\7\v\r\r\8\y\d\v\s\8\1\l\d\z\2\u\4\l\t\v\b\u\p\s\u\q\e\r\j\p\j\h\i\g\c\e\0\j\c\d\5\p\i\8\y\s\7\m\3\c\i\t\3\t\t\t\v\n\j\4\c\o\e\u\l\c\7\1\n\x\r\r\u\k\0\4\w\1\g\j\w\i\3\r\c\l\w\z\h\b\o\b\q\0\b\c\f\x\j\p\0\9\7\e\0\i\d\e\f\l\f\v\n\e\u\e\d\f\h\a\8\b\i\s\s\p\0\g\3\n\o\u\u\x\1\w\v\x\w\t\o\q\e\c\2\7\1\e\p\n\q\6\f\b\v\r\r\q\7\q\c\f\4\3\q\q\5\5\p\f\r\b\3\1\w\7\r\x\4\g\7\6\e\6\d\c\s\b\1\u\y\9\a\8\l\b\t\c\1\n\t\5\r\o\l\c\6\4\n\x\6\s\k\7\0\i\a\u\k\d\d\r\5\l\j\u\7\h\g\d\x\q\1\u\3\7\3\q\v\a\e\p\j\j\h\d\6\r\7\y\g\q\8\r\3\f\a\2\o\a\x\u\g\c\x\w\8\x\h\d\t\1\2\h\5\a ]] 00:07:30.786 10:56:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:30.786 10:56:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:31.046 [2024-10-29 10:56:36.330824] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:31.046 [2024-10-29 10:56:36.331078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73391 ] 00:07:31.046 [2024-10-29 10:56:36.477304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.046 [2024-10-29 10:56:36.499581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.046 [2024-10-29 10:56:36.529229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.305  [2024-10-29T10:56:36.802Z] Copying: 512/512 [B] (average 125 kBps) 00:07:31.305 00:07:31.305 10:56:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s5rtoc9k5ymv5aseu51n7bq195vn13xqms3r030onv28bw3xqwh4y15f6zbnpu7kjkuuh5a55cdey4xh901yzukc5d3l3g7c0yrg1q357rcgw0s5lov7sdm4pcxb7di30pkjnj4dizpraejalql50eb11g1os9tgdj5be0xkr8idw9yipm5oede4nj6505kd53uvuzuza9793cnv5pkrp8y9tqqoyjttb47iwdnskmda7vrr8ydvs81ldz2u4ltvbupsuqerjpjhigce0jcd5pi8ys7m3cit3tttvnj4coeulc71nxrruk04w1gjwi3rclwzhbobq0bcfxjp097e0ideflfvneuedfha8bissp0g3nouux1wvxwtoqec271epnq6fbvrrq7qcf43qq55pfrb31w7rx4g76e6dcsb1uy9a8lbtc1nt5rolc64nx6sk70iaukddr5lju7hgdxq1u373qvaepjjhd6r7ygq8r3fa2oaxugcxw8xhdt12h5a == \s\5\r\t\o\c\9\k\5\y\m\v\5\a\s\e\u\5\1\n\7\b\q\1\9\5\v\n\1\3\x\q\m\s\3\r\0\3\0\o\n\v\2\8\b\w\3\x\q\w\h\4\y\1\5\f\6\z\b\n\p\u\7\k\j\k\u\u\h\5\a\5\5\c\d\e\y\4\x\h\9\0\1\y\z\u\k\c\5\d\3\l\3\g\7\c\0\y\r\g\1\q\3\5\7\r\c\g\w\0\s\5\l\o\v\7\s\d\m\4\p\c\x\b\7\d\i\3\0\p\k\j\n\j\4\d\i\z\p\r\a\e\j\a\l\q\l\5\0\e\b\1\1\g\1\o\s\9\t\g\d\j\5\b\e\0\x\k\r\8\i\d\w\9\y\i\p\m\5\o\e\d\e\4\n\j\6\5\0\5\k\d\5\3\u\v\u\z\u\z\a\9\7\9\3\c\n\v\5\p\k\r\p\8\y\9\t\q\q\o\y\j\t\t\b\4\7\i\w\d\n\s\k\m\d\a\7\v\r\r\8\y\d\v\s\8\1\l\d\z\2\u\4\l\t\v\b\u\p\s\u\q\e\r\j\p\j\h\i\g\c\e\0\j\c\d\5\p\i\8\y\s\7\m\3\c\i\t\3\t\t\t\v\n\j\4\c\o\e\u\l\c\7\1\n\x\r\r\u\k\0\4\w\1\g\j\w\i\3\r\c\l\w\z\h\b\o\b\q\0\b\c\f\x\j\p\0\9\7\e\0\i\d\e\f\l\f\v\n\e\u\e\d\f\h\a\8\b\i\s\s\p\0\g\3\n\o\u\u\x\1\w\v\x\w\t\o\q\e\c\2\7\1\e\p\n\q\6\f\b\v\r\r\q\7\q\c\f\4\3\q\q\5\5\p\f\r\b\3\1\w\7\r\x\4\g\7\6\e\6\d\c\s\b\1\u\y\9\a\8\l\b\t\c\1\n\t\5\r\o\l\c\6\4\n\x\6\s\k\7\0\i\a\u\k\d\d\r\5\l\j\u\7\h\g\d\x\q\1\u\3\7\3\q\v\a\e\p\j\j\h\d\6\r\7\y\g\q\8\r\3\f\a\2\o\a\x\u\g\c\x\w\8\x\h\d\t\1\2\h\5\a ]] 00:07:31.305 10:56:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:31.305 10:56:36 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:31.305 [2024-10-29 10:56:36.714046] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:31.305 [2024-10-29 10:56:36.714306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73395 ] 00:07:31.563 [2024-10-29 10:56:36.851447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.563 [2024-10-29 10:56:36.870667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.563 [2024-10-29 10:56:36.904474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.563  [2024-10-29T10:56:37.060Z] Copying: 512/512 [B] (average 250 kBps) 00:07:31.563 00:07:31.563 10:56:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ s5rtoc9k5ymv5aseu51n7bq195vn13xqms3r030onv28bw3xqwh4y15f6zbnpu7kjkuuh5a55cdey4xh901yzukc5d3l3g7c0yrg1q357rcgw0s5lov7sdm4pcxb7di30pkjnj4dizpraejalql50eb11g1os9tgdj5be0xkr8idw9yipm5oede4nj6505kd53uvuzuza9793cnv5pkrp8y9tqqoyjttb47iwdnskmda7vrr8ydvs81ldz2u4ltvbupsuqerjpjhigce0jcd5pi8ys7m3cit3tttvnj4coeulc71nxrruk04w1gjwi3rclwzhbobq0bcfxjp097e0ideflfvneuedfha8bissp0g3nouux1wvxwtoqec271epnq6fbvrrq7qcf43qq55pfrb31w7rx4g76e6dcsb1uy9a8lbtc1nt5rolc64nx6sk70iaukddr5lju7hgdxq1u373qvaepjjhd6r7ygq8r3fa2oaxugcxw8xhdt12h5a == \s\5\r\t\o\c\9\k\5\y\m\v\5\a\s\e\u\5\1\n\7\b\q\1\9\5\v\n\1\3\x\q\m\s\3\r\0\3\0\o\n\v\2\8\b\w\3\x\q\w\h\4\y\1\5\f\6\z\b\n\p\u\7\k\j\k\u\u\h\5\a\5\5\c\d\e\y\4\x\h\9\0\1\y\z\u\k\c\5\d\3\l\3\g\7\c\0\y\r\g\1\q\3\5\7\r\c\g\w\0\s\5\l\o\v\7\s\d\m\4\p\c\x\b\7\d\i\3\0\p\k\j\n\j\4\d\i\z\p\r\a\e\j\a\l\q\l\5\0\e\b\1\1\g\1\o\s\9\t\g\d\j\5\b\e\0\x\k\r\8\i\d\w\9\y\i\p\m\5\o\e\d\e\4\n\j\6\5\0\5\k\d\5\3\u\v\u\z\u\z\a\9\7\9\3\c\n\v\5\p\k\r\p\8\y\9\t\q\q\o\y\j\t\t\b\4\7\i\w\d\n\s\k\m\d\a\7\v\r\r\8\y\d\v\s\8\1\l\d\z\2\u\4\l\t\v\b\u\p\s\u\q\e\r\j\p\j\h\i\g\c\e\0\j\c\d\5\p\i\8\y\s\7\m\3\c\i\t\3\t\t\t\v\n\j\4\c\o\e\u\l\c\7\1\n\x\r\r\u\k\0\4\w\1\g\j\w\i\3\r\c\l\w\z\h\b\o\b\q\0\b\c\f\x\j\p\0\9\7\e\0\i\d\e\f\l\f\v\n\e\u\e\d\f\h\a\8\b\i\s\s\p\0\g\3\n\o\u\u\x\1\w\v\x\w\t\o\q\e\c\2\7\1\e\p\n\q\6\f\b\v\r\r\q\7\q\c\f\4\3\q\q\5\5\p\f\r\b\3\1\w\7\r\x\4\g\7\6\e\6\d\c\s\b\1\u\y\9\a\8\l\b\t\c\1\n\t\5\r\o\l\c\6\4\n\x\6\s\k\7\0\i\a\u\k\d\d\r\5\l\j\u\7\h\g\d\x\q\1\u\3\7\3\q\v\a\e\p\j\j\h\d\6\r\7\y\g\q\8\r\3\f\a\2\o\a\x\u\g\c\x\w\8\x\h\d\t\1\2\h\5\a ]] 00:07:31.563 00:07:31.563 real 0m3.054s 00:07:31.563 user 0m1.488s 00:07:31.563 sys 0m1.352s 00:07:31.563 ************************************ 00:07:31.563 END TEST dd_flags_misc 00:07:31.563 ************************************ 00:07:31.563 10:56:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.563 10:56:37 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:31.822 * Second test run, disabling liburing, forcing AIO 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.822 ************************************ 00:07:31.822 START TEST dd_flag_append_forced_aio 00:07:31.822 ************************************ 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=e1wfueltbamjbkqf38abgqcm5cv7y2ln 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=vabo5zgswtysf3lxsdyrb3pe4cdr7qqo 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s e1wfueltbamjbkqf38abgqcm5cv7y2ln 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s vabo5zgswtysf3lxsdyrb3pe4cdr7qqo 00:07:31.822 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:31.822 [2024-10-29 10:56:37.152032] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
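dd_flag_append_forced_aio, just above, generates two 32-byte strings, seeds dd.dump0 and dd.dump1 with them, appends dump0 onto dump1 via --aio --oflag=append, and then checks that dump1 holds its original contents followed by the appended bytes. A minimal sketch under the same assumptions about paths, using fixed strings in place of the generated bytes:

  # Sketch: --oflag=append (here with --aio forcing the AIO code path) must append, not truncate.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd    # path as seen in the log above
  src=/tmp/dd.src; dst=/tmp/dd.dst                          # assumed scratch paths
  printf %s "APPENDED" > "$src"                             # stands in for the second 32-byte string (dump0)
  printf %s "EXISTING" > "$dst"                             # pre-existing destination content (dump1)
  "$SPDK_DD" --aio --if="$src" --of="$dst" --oflag=append
  [[ $(cat "$dst") == "EXISTINGAPPENDED" ]] \
    && echo "append preserved the existing data" \
    || echo "append check failed" >&2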
00:07:31.822 [2024-10-29 10:56:37.152135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73429 ] 00:07:31.822 [2024-10-29 10:56:37.299355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.822 [2024-10-29 10:56:37.318439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.082 [2024-10-29 10:56:37.347004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.082  [2024-10-29T10:56:37.579Z] Copying: 32/32 [B] (average 31 kBps) 00:07:32.082 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ vabo5zgswtysf3lxsdyrb3pe4cdr7qqoe1wfueltbamjbkqf38abgqcm5cv7y2ln == \v\a\b\o\5\z\g\s\w\t\y\s\f\3\l\x\s\d\y\r\b\3\p\e\4\c\d\r\7\q\q\o\e\1\w\f\u\e\l\t\b\a\m\j\b\k\q\f\3\8\a\b\g\q\c\m\5\c\v\7\y\2\l\n ]] 00:07:32.082 00:07:32.082 real 0m0.400s 00:07:32.082 user 0m0.186s 00:07:32.082 sys 0m0.096s 00:07:32.082 ************************************ 00:07:32.082 END TEST dd_flag_append_forced_aio 00:07:32.082 ************************************ 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:32.082 ************************************ 00:07:32.082 START TEST dd_flag_directory_forced_aio 00:07:32.082 ************************************ 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.082 10:56:37 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.082 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.341 [2024-10-29 10:56:37.602565] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:32.341 [2024-10-29 10:56:37.602655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73450 ] 00:07:32.341 [2024-10-29 10:56:37.746884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.341 [2024-10-29 10:56:37.766141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.341 [2024-10-29 10:56:37.794495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.341 [2024-10-29 10:56:37.810002] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:32.341 [2024-10-29 10:56:37.810050] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:32.341 [2024-10-29 10:56:37.810084] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.601 [2024-10-29 10:56:37.870669] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.601 10:56:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:32.601 [2024-10-29 10:56:37.987515] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:32.601 [2024-10-29 10:56:37.987615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73454 ] 00:07:32.860 [2024-10-29 10:56:38.133004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.860 [2024-10-29 10:56:38.151720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.860 [2024-10-29 10:56:38.179611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.860 [2024-10-29 10:56:38.194811] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:32.860 [2024-10-29 10:56:38.195117] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:32.860 [2024-10-29 10:56:38.195159] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.860 [2024-10-29 10:56:38.254172] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:32.860 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:32.860 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.860 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:32.860 ************************************ 00:07:32.860 END TEST dd_flag_directory_forced_aio 00:07:32.860 ************************************ 00:07:32.860 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:32.860 10:56:38 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.861 00:07:32.861 real 0m0.766s 00:07:32.861 user 0m0.365s 00:07:32.861 sys 0m0.193s 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:32.861 10:56:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:33.120 ************************************ 00:07:33.120 START TEST dd_flag_nofollow_forced_aio 00:07:33.120 ************************************ 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.120 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:33.121 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.121 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.121 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.121 [2024-10-29 10:56:38.435122] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:33.121 [2024-10-29 10:56:38.435216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73488 ] 00:07:33.121 [2024-10-29 10:56:38.580848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.121 [2024-10-29 10:56:38.599487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.380 [2024-10-29 10:56:38.628419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.380 [2024-10-29 10:56:38.643492] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:33.380 [2024-10-29 10:56:38.643849] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:33.380 [2024-10-29 10:56:38.643894] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.380 [2024-10-29 10:56:38.702199] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.380 10:56:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:33.380 [2024-10-29 10:56:38.806625] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:33.380 [2024-10-29 10:56:38.806868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73492 ] 00:07:33.639 [2024-10-29 10:56:38.950785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.639 [2024-10-29 10:56:38.971754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.639 [2024-10-29 10:56:39.002461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.639 [2024-10-29 10:56:39.019761] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:33.639 [2024-10-29 10:56:39.019849] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:33.639 [2024-10-29 10:56:39.019886] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.639 [2024-10-29 10:56:39.084098] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:33.639 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:33.898 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:33.898 [2024-10-29 10:56:39.182502] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:33.898 [2024-10-29 10:56:39.182798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73505 ] 00:07:33.898 [2024-10-29 10:56:39.322511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.898 [2024-10-29 10:56:39.342969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.898 [2024-10-29 10:56:39.371466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.898  [2024-10-29T10:56:39.655Z] Copying: 512/512 [B] (average 500 kBps) 00:07:34.158 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 3rlnpvxjnucft5vv3w7itk4tu9s02m5d23ipxfaadmflesfq60sm71cpmp48fxw6lurqfery0rreqgukywxltgz5u6knsjaud9odj0uyatjce6azf02g1nbcwkmqxu3moy5ojd4sxppxye6wj8coace0p2hj9e8npc4nm03ee8hhuclux0hdl6x9yb4a6a1h8bkyfzhqo4moglz6o3y975wqbc9g7k5qqewdv1d0ukew3gsgy6iej4wpz851erctjhoqtnf1a0duskbtl6m368axu7louht5qrd1jf0mdd5dq81550zlgpxhfjpakyt3vmlg76j7cxipfbipoujs45dsguvhgntl3idqzybgl12me8zfjur9577pqn2zyjou8070m19jqo4az2u2qntc0ardi8p78gbp8s3jqnbumo5qptsd0ukk2kef49y3npbpx40khjpul302oqdj8zsfqcmdejwucdnemuz8uce8pxqlyr5h4mvbmd2k5tpsx3fm == \3\r\l\n\p\v\x\j\n\u\c\f\t\5\v\v\3\w\7\i\t\k\4\t\u\9\s\0\2\m\5\d\2\3\i\p\x\f\a\a\d\m\f\l\e\s\f\q\6\0\s\m\7\1\c\p\m\p\4\8\f\x\w\6\l\u\r\q\f\e\r\y\0\r\r\e\q\g\u\k\y\w\x\l\t\g\z\5\u\6\k\n\s\j\a\u\d\9\o\d\j\0\u\y\a\t\j\c\e\6\a\z\f\0\2\g\1\n\b\c\w\k\m\q\x\u\3\m\o\y\5\o\j\d\4\s\x\p\p\x\y\e\6\w\j\8\c\o\a\c\e\0\p\2\h\j\9\e\8\n\p\c\4\n\m\0\3\e\e\8\h\h\u\c\l\u\x\0\h\d\l\6\x\9\y\b\4\a\6\a\1\h\8\b\k\y\f\z\h\q\o\4\m\o\g\l\z\6\o\3\y\9\7\5\w\q\b\c\9\g\7\k\5\q\q\e\w\d\v\1\d\0\u\k\e\w\3\g\s\g\y\6\i\e\j\4\w\p\z\8\5\1\e\r\c\t\j\h\o\q\t\n\f\1\a\0\d\u\s\k\b\t\l\6\m\3\6\8\a\x\u\7\l\o\u\h\t\5\q\r\d\1\j\f\0\m\d\d\5\d\q\8\1\5\5\0\z\l\g\p\x\h\f\j\p\a\k\y\t\3\v\m\l\g\7\6\j\7\c\x\i\p\f\b\i\p\o\u\j\s\4\5\d\s\g\u\v\h\g\n\t\l\3\i\d\q\z\y\b\g\l\1\2\m\e\8\z\f\j\u\r\9\5\7\7\p\q\n\2\z\y\j\o\u\8\0\7\0\m\1\9\j\q\o\4\a\z\2\u\2\q\n\t\c\0\a\r\d\i\8\p\7\8\g\b\p\8\s\3\j\q\n\b\u\m\o\5\q\p\t\s\d\0\u\k\k\2\k\e\f\4\9\y\3\n\p\b\p\x\4\0\k\h\j\p\u\l\3\0\2\o\q\d\j\8\z\s\f\q\c\m\d\e\j\w\u\c\d\n\e\m\u\z\8\u\c\e\8\p\x\q\l\y\r\5\h\4\m\v\b\m\d\2\k\5\t\p\s\x\3\f\m ]] 00:07:34.158 00:07:34.158 real 0m1.152s 00:07:34.158 user 0m0.552s 00:07:34.158 sys 0m0.268s 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.158 ************************************ 00:07:34.158 END TEST dd_flag_nofollow_forced_aio 00:07:34.158 ************************************ 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:34.158 ************************************ 00:07:34.158 START TEST dd_flag_noatime_forced_aio 00:07:34.158 ************************************ 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1730199399 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1730199399 00:07:34.158 10:56:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:35.533 10:56:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.533 [2024-10-29 10:56:40.652656] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:35.533 [2024-10-29 10:56:40.652760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73540 ] 00:07:35.533 [2024-10-29 10:56:40.806651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.533 [2024-10-29 10:56:40.830747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.533 [2024-10-29 10:56:40.864553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.533  [2024-10-29T10:56:41.030Z] Copying: 512/512 [B] (average 500 kBps) 00:07:35.533 00:07:35.533 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.533 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1730199399 )) 00:07:35.533 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.533 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1730199399 )) 00:07:35.533 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.792 [2024-10-29 10:56:41.082266] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:35.792 [2024-10-29 10:56:41.082365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73546 ] 00:07:35.792 [2024-10-29 10:56:41.231164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.792 [2024-10-29 10:56:41.250985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.792 [2024-10-29 10:56:41.280487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.051  [2024-10-29T10:56:41.548Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.051 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1730199401 )) 00:07:36.051 00:07:36.051 real 0m1.855s 00:07:36.051 user 0m0.410s 00:07:36.051 sys 0m0.205s 00:07:36.051 ************************************ 00:07:36.051 END TEST dd_flag_noatime_forced_aio 00:07:36.051 ************************************ 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.051 ************************************ 00:07:36.051 START TEST dd_flags_misc_forced_aio 00:07:36.051 ************************************ 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:36.051 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:36.052 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.052 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:36.052 [2024-10-29 10:56:41.544175] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:36.052 [2024-10-29 10:56:41.544272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73578 ] 00:07:36.310 [2024-10-29 10:56:41.686932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.310 [2024-10-29 10:56:41.705596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.310 [2024-10-29 10:56:41.733193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.310  [2024-10-29T10:56:42.066Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.569 00:07:36.569 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cmxmzi7oeglvi6yth2i3bt24e6ectvw3g1waah5rlana4v06ti2vmcwrhq4le0v0sv1yzw74bonlunkuhbmgqqoy98wclmrtkt626kcne6ylw60qznqbp5jeha2uwshgcoe9bzbt21syj80ijat85iukgtrlnepg16xd6yb7pnfy4gdy5ylcnnluuu6f6p6ymkl7aiyg9vyy55x708e13j16umvsaexzcuwgx9twc0598k78b9ur9ixt7274916zrtf92rv9ovhqg4vin5fl3zelbtoghjdpqosg8mrwiaqp3a0z5x47zzb1akmeiuj7j1brpknd8zgpzom84fqgda8nfby4ofjgf95o6g7xf7vipe6rc91pb11qu9hebposc8ihz482focro0bm367qdrkb4jy7bqdfn28n10zm894ya9xwwq86a7dl2444ghvd3vhvwxsoqcxs60uoaep5qbsn6cqxip4dj09ekx51crleuzwva4rcdk4oqbstlalu == 
\c\m\x\m\z\i\7\o\e\g\l\v\i\6\y\t\h\2\i\3\b\t\2\4\e\6\e\c\t\v\w\3\g\1\w\a\a\h\5\r\l\a\n\a\4\v\0\6\t\i\2\v\m\c\w\r\h\q\4\l\e\0\v\0\s\v\1\y\z\w\7\4\b\o\n\l\u\n\k\u\h\b\m\g\q\q\o\y\9\8\w\c\l\m\r\t\k\t\6\2\6\k\c\n\e\6\y\l\w\6\0\q\z\n\q\b\p\5\j\e\h\a\2\u\w\s\h\g\c\o\e\9\b\z\b\t\2\1\s\y\j\8\0\i\j\a\t\8\5\i\u\k\g\t\r\l\n\e\p\g\1\6\x\d\6\y\b\7\p\n\f\y\4\g\d\y\5\y\l\c\n\n\l\u\u\u\6\f\6\p\6\y\m\k\l\7\a\i\y\g\9\v\y\y\5\5\x\7\0\8\e\1\3\j\1\6\u\m\v\s\a\e\x\z\c\u\w\g\x\9\t\w\c\0\5\9\8\k\7\8\b\9\u\r\9\i\x\t\7\2\7\4\9\1\6\z\r\t\f\9\2\r\v\9\o\v\h\q\g\4\v\i\n\5\f\l\3\z\e\l\b\t\o\g\h\j\d\p\q\o\s\g\8\m\r\w\i\a\q\p\3\a\0\z\5\x\4\7\z\z\b\1\a\k\m\e\i\u\j\7\j\1\b\r\p\k\n\d\8\z\g\p\z\o\m\8\4\f\q\g\d\a\8\n\f\b\y\4\o\f\j\g\f\9\5\o\6\g\7\x\f\7\v\i\p\e\6\r\c\9\1\p\b\1\1\q\u\9\h\e\b\p\o\s\c\8\i\h\z\4\8\2\f\o\c\r\o\0\b\m\3\6\7\q\d\r\k\b\4\j\y\7\b\q\d\f\n\2\8\n\1\0\z\m\8\9\4\y\a\9\x\w\w\q\8\6\a\7\d\l\2\4\4\4\g\h\v\d\3\v\h\v\w\x\s\o\q\c\x\s\6\0\u\o\a\e\p\5\q\b\s\n\6\c\q\x\i\p\4\d\j\0\9\e\k\x\5\1\c\r\l\e\u\z\w\v\a\4\r\c\d\k\4\o\q\b\s\t\l\a\l\u ]] 00:07:36.569 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.569 10:56:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:36.569 [2024-10-29 10:56:41.923747] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:36.569 [2024-10-29 10:56:41.923864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73580 ] 00:07:36.828 [2024-10-29 10:56:42.068782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.828 [2024-10-29 10:56:42.089794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.828 [2024-10-29 10:56:42.123053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.828  [2024-10-29T10:56:42.325Z] Copying: 512/512 [B] (average 500 kBps) 00:07:36.828 00:07:36.828 10:56:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cmxmzi7oeglvi6yth2i3bt24e6ectvw3g1waah5rlana4v06ti2vmcwrhq4le0v0sv1yzw74bonlunkuhbmgqqoy98wclmrtkt626kcne6ylw60qznqbp5jeha2uwshgcoe9bzbt21syj80ijat85iukgtrlnepg16xd6yb7pnfy4gdy5ylcnnluuu6f6p6ymkl7aiyg9vyy55x708e13j16umvsaexzcuwgx9twc0598k78b9ur9ixt7274916zrtf92rv9ovhqg4vin5fl3zelbtoghjdpqosg8mrwiaqp3a0z5x47zzb1akmeiuj7j1brpknd8zgpzom84fqgda8nfby4ofjgf95o6g7xf7vipe6rc91pb11qu9hebposc8ihz482focro0bm367qdrkb4jy7bqdfn28n10zm894ya9xwwq86a7dl2444ghvd3vhvwxsoqcxs60uoaep5qbsn6cqxip4dj09ekx51crleuzwva4rcdk4oqbstlalu == 
\c\m\x\m\z\i\7\o\e\g\l\v\i\6\y\t\h\2\i\3\b\t\2\4\e\6\e\c\t\v\w\3\g\1\w\a\a\h\5\r\l\a\n\a\4\v\0\6\t\i\2\v\m\c\w\r\h\q\4\l\e\0\v\0\s\v\1\y\z\w\7\4\b\o\n\l\u\n\k\u\h\b\m\g\q\q\o\y\9\8\w\c\l\m\r\t\k\t\6\2\6\k\c\n\e\6\y\l\w\6\0\q\z\n\q\b\p\5\j\e\h\a\2\u\w\s\h\g\c\o\e\9\b\z\b\t\2\1\s\y\j\8\0\i\j\a\t\8\5\i\u\k\g\t\r\l\n\e\p\g\1\6\x\d\6\y\b\7\p\n\f\y\4\g\d\y\5\y\l\c\n\n\l\u\u\u\6\f\6\p\6\y\m\k\l\7\a\i\y\g\9\v\y\y\5\5\x\7\0\8\e\1\3\j\1\6\u\m\v\s\a\e\x\z\c\u\w\g\x\9\t\w\c\0\5\9\8\k\7\8\b\9\u\r\9\i\x\t\7\2\7\4\9\1\6\z\r\t\f\9\2\r\v\9\o\v\h\q\g\4\v\i\n\5\f\l\3\z\e\l\b\t\o\g\h\j\d\p\q\o\s\g\8\m\r\w\i\a\q\p\3\a\0\z\5\x\4\7\z\z\b\1\a\k\m\e\i\u\j\7\j\1\b\r\p\k\n\d\8\z\g\p\z\o\m\8\4\f\q\g\d\a\8\n\f\b\y\4\o\f\j\g\f\9\5\o\6\g\7\x\f\7\v\i\p\e\6\r\c\9\1\p\b\1\1\q\u\9\h\e\b\p\o\s\c\8\i\h\z\4\8\2\f\o\c\r\o\0\b\m\3\6\7\q\d\r\k\b\4\j\y\7\b\q\d\f\n\2\8\n\1\0\z\m\8\9\4\y\a\9\x\w\w\q\8\6\a\7\d\l\2\4\4\4\g\h\v\d\3\v\h\v\w\x\s\o\q\c\x\s\6\0\u\o\a\e\p\5\q\b\s\n\6\c\q\x\i\p\4\d\j\0\9\e\k\x\5\1\c\r\l\e\u\z\w\v\a\4\r\c\d\k\4\o\q\b\s\t\l\a\l\u ]] 00:07:36.828 10:56:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:36.828 10:56:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:36.828 [2024-10-29 10:56:42.324794] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:36.828 [2024-10-29 10:56:42.324895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73593 ] 00:07:37.087 [2024-10-29 10:56:42.471084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.087 [2024-10-29 10:56:42.489803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.087 [2024-10-29 10:56:42.517644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.087  [2024-10-29T10:56:42.853Z] Copying: 512/512 [B] (average 166 kBps) 00:07:37.356 00:07:37.356 10:56:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cmxmzi7oeglvi6yth2i3bt24e6ectvw3g1waah5rlana4v06ti2vmcwrhq4le0v0sv1yzw74bonlunkuhbmgqqoy98wclmrtkt626kcne6ylw60qznqbp5jeha2uwshgcoe9bzbt21syj80ijat85iukgtrlnepg16xd6yb7pnfy4gdy5ylcnnluuu6f6p6ymkl7aiyg9vyy55x708e13j16umvsaexzcuwgx9twc0598k78b9ur9ixt7274916zrtf92rv9ovhqg4vin5fl3zelbtoghjdpqosg8mrwiaqp3a0z5x47zzb1akmeiuj7j1brpknd8zgpzom84fqgda8nfby4ofjgf95o6g7xf7vipe6rc91pb11qu9hebposc8ihz482focro0bm367qdrkb4jy7bqdfn28n10zm894ya9xwwq86a7dl2444ghvd3vhvwxsoqcxs60uoaep5qbsn6cqxip4dj09ekx51crleuzwva4rcdk4oqbstlalu == 
\c\m\x\m\z\i\7\o\e\g\l\v\i\6\y\t\h\2\i\3\b\t\2\4\e\6\e\c\t\v\w\3\g\1\w\a\a\h\5\r\l\a\n\a\4\v\0\6\t\i\2\v\m\c\w\r\h\q\4\l\e\0\v\0\s\v\1\y\z\w\7\4\b\o\n\l\u\n\k\u\h\b\m\g\q\q\o\y\9\8\w\c\l\m\r\t\k\t\6\2\6\k\c\n\e\6\y\l\w\6\0\q\z\n\q\b\p\5\j\e\h\a\2\u\w\s\h\g\c\o\e\9\b\z\b\t\2\1\s\y\j\8\0\i\j\a\t\8\5\i\u\k\g\t\r\l\n\e\p\g\1\6\x\d\6\y\b\7\p\n\f\y\4\g\d\y\5\y\l\c\n\n\l\u\u\u\6\f\6\p\6\y\m\k\l\7\a\i\y\g\9\v\y\y\5\5\x\7\0\8\e\1\3\j\1\6\u\m\v\s\a\e\x\z\c\u\w\g\x\9\t\w\c\0\5\9\8\k\7\8\b\9\u\r\9\i\x\t\7\2\7\4\9\1\6\z\r\t\f\9\2\r\v\9\o\v\h\q\g\4\v\i\n\5\f\l\3\z\e\l\b\t\o\g\h\j\d\p\q\o\s\g\8\m\r\w\i\a\q\p\3\a\0\z\5\x\4\7\z\z\b\1\a\k\m\e\i\u\j\7\j\1\b\r\p\k\n\d\8\z\g\p\z\o\m\8\4\f\q\g\d\a\8\n\f\b\y\4\o\f\j\g\f\9\5\o\6\g\7\x\f\7\v\i\p\e\6\r\c\9\1\p\b\1\1\q\u\9\h\e\b\p\o\s\c\8\i\h\z\4\8\2\f\o\c\r\o\0\b\m\3\6\7\q\d\r\k\b\4\j\y\7\b\q\d\f\n\2\8\n\1\0\z\m\8\9\4\y\a\9\x\w\w\q\8\6\a\7\d\l\2\4\4\4\g\h\v\d\3\v\h\v\w\x\s\o\q\c\x\s\6\0\u\o\a\e\p\5\q\b\s\n\6\c\q\x\i\p\4\d\j\0\9\e\k\x\5\1\c\r\l\e\u\z\w\v\a\4\r\c\d\k\4\o\q\b\s\t\l\a\l\u ]] 00:07:37.356 10:56:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.357 10:56:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:37.357 [2024-10-29 10:56:42.718261] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:37.357 [2024-10-29 10:56:42.718359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73595 ] 00:07:37.648 [2024-10-29 10:56:42.866423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.648 [2024-10-29 10:56:42.885060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.648 [2024-10-29 10:56:42.914444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.648  [2024-10-29T10:56:43.145Z] Copying: 512/512 [B] (average 500 kBps) 00:07:37.648 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ cmxmzi7oeglvi6yth2i3bt24e6ectvw3g1waah5rlana4v06ti2vmcwrhq4le0v0sv1yzw74bonlunkuhbmgqqoy98wclmrtkt626kcne6ylw60qznqbp5jeha2uwshgcoe9bzbt21syj80ijat85iukgtrlnepg16xd6yb7pnfy4gdy5ylcnnluuu6f6p6ymkl7aiyg9vyy55x708e13j16umvsaexzcuwgx9twc0598k78b9ur9ixt7274916zrtf92rv9ovhqg4vin5fl3zelbtoghjdpqosg8mrwiaqp3a0z5x47zzb1akmeiuj7j1brpknd8zgpzom84fqgda8nfby4ofjgf95o6g7xf7vipe6rc91pb11qu9hebposc8ihz482focro0bm367qdrkb4jy7bqdfn28n10zm894ya9xwwq86a7dl2444ghvd3vhvwxsoqcxs60uoaep5qbsn6cqxip4dj09ekx51crleuzwva4rcdk4oqbstlalu == 
\c\m\x\m\z\i\7\o\e\g\l\v\i\6\y\t\h\2\i\3\b\t\2\4\e\6\e\c\t\v\w\3\g\1\w\a\a\h\5\r\l\a\n\a\4\v\0\6\t\i\2\v\m\c\w\r\h\q\4\l\e\0\v\0\s\v\1\y\z\w\7\4\b\o\n\l\u\n\k\u\h\b\m\g\q\q\o\y\9\8\w\c\l\m\r\t\k\t\6\2\6\k\c\n\e\6\y\l\w\6\0\q\z\n\q\b\p\5\j\e\h\a\2\u\w\s\h\g\c\o\e\9\b\z\b\t\2\1\s\y\j\8\0\i\j\a\t\8\5\i\u\k\g\t\r\l\n\e\p\g\1\6\x\d\6\y\b\7\p\n\f\y\4\g\d\y\5\y\l\c\n\n\l\u\u\u\6\f\6\p\6\y\m\k\l\7\a\i\y\g\9\v\y\y\5\5\x\7\0\8\e\1\3\j\1\6\u\m\v\s\a\e\x\z\c\u\w\g\x\9\t\w\c\0\5\9\8\k\7\8\b\9\u\r\9\i\x\t\7\2\7\4\9\1\6\z\r\t\f\9\2\r\v\9\o\v\h\q\g\4\v\i\n\5\f\l\3\z\e\l\b\t\o\g\h\j\d\p\q\o\s\g\8\m\r\w\i\a\q\p\3\a\0\z\5\x\4\7\z\z\b\1\a\k\m\e\i\u\j\7\j\1\b\r\p\k\n\d\8\z\g\p\z\o\m\8\4\f\q\g\d\a\8\n\f\b\y\4\o\f\j\g\f\9\5\o\6\g\7\x\f\7\v\i\p\e\6\r\c\9\1\p\b\1\1\q\u\9\h\e\b\p\o\s\c\8\i\h\z\4\8\2\f\o\c\r\o\0\b\m\3\6\7\q\d\r\k\b\4\j\y\7\b\q\d\f\n\2\8\n\1\0\z\m\8\9\4\y\a\9\x\w\w\q\8\6\a\7\d\l\2\4\4\4\g\h\v\d\3\v\h\v\w\x\s\o\q\c\x\s\6\0\u\o\a\e\p\5\q\b\s\n\6\c\q\x\i\p\4\d\j\0\9\e\k\x\5\1\c\r\l\e\u\z\w\v\a\4\r\c\d\k\4\o\q\b\s\t\l\a\l\u ]] 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:37.648 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:37.648 [2024-10-29 10:56:43.132571] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:37.648 [2024-10-29 10:56:43.132662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73597 ] 00:07:37.907 [2024-10-29 10:56:43.283267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.907 [2024-10-29 10:56:43.306270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.907 [2024-10-29 10:56:43.335987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.907  [2024-10-29T10:56:43.663Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.166 00:07:38.166 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rijz8xq0ilm4k6cgosgzyb4u1xbvr1ucdw7mwllbfgdgdcva5uvnjv0qo43hxdjoj8ln7eszve7xdah4kxk19p3bad3wizeyfm75l7pedigw2hvvopezmdbavo0k3mmcwzyordjn5xvjypxjyg3r3n6v6utfzuxfub8aagwkhxi9ixiba1cj4bwg7o7lr3vnhigeb7g4ir4uvrvoq8ccpcbkf4gt8sb8x7rroespzzhwvxgs492d4rytkfhm7kh62xsztncoud3tknx36uthwcr9c72uxhv2p3aeby24rfgvt07pl2x5z0ecb7vzm7kaowrp56b4t2l0rxu53vjc3b0wjzdxlfavpblq6dablk5myjs88crelscs2v46e4xjaiwer19fyigcl5zk1opmprnhmm55frk8orwx2d2qkqor96tgfdgjg2d7dwtc14b9irai8n34p5aat53kof4trlyv1n01xfwmjkltrs3j5c60i15ozh3am1pod9qtk36q == \r\i\j\z\8\x\q\0\i\l\m\4\k\6\c\g\o\s\g\z\y\b\4\u\1\x\b\v\r\1\u\c\d\w\7\m\w\l\l\b\f\g\d\g\d\c\v\a\5\u\v\n\j\v\0\q\o\4\3\h\x\d\j\o\j\8\l\n\7\e\s\z\v\e\7\x\d\a\h\4\k\x\k\1\9\p\3\b\a\d\3\w\i\z\e\y\f\m\7\5\l\7\p\e\d\i\g\w\2\h\v\v\o\p\e\z\m\d\b\a\v\o\0\k\3\m\m\c\w\z\y\o\r\d\j\n\5\x\v\j\y\p\x\j\y\g\3\r\3\n\6\v\6\u\t\f\z\u\x\f\u\b\8\a\a\g\w\k\h\x\i\9\i\x\i\b\a\1\c\j\4\b\w\g\7\o\7\l\r\3\v\n\h\i\g\e\b\7\g\4\i\r\4\u\v\r\v\o\q\8\c\c\p\c\b\k\f\4\g\t\8\s\b\8\x\7\r\r\o\e\s\p\z\z\h\w\v\x\g\s\4\9\2\d\4\r\y\t\k\f\h\m\7\k\h\6\2\x\s\z\t\n\c\o\u\d\3\t\k\n\x\3\6\u\t\h\w\c\r\9\c\7\2\u\x\h\v\2\p\3\a\e\b\y\2\4\r\f\g\v\t\0\7\p\l\2\x\5\z\0\e\c\b\7\v\z\m\7\k\a\o\w\r\p\5\6\b\4\t\2\l\0\r\x\u\5\3\v\j\c\3\b\0\w\j\z\d\x\l\f\a\v\p\b\l\q\6\d\a\b\l\k\5\m\y\j\s\8\8\c\r\e\l\s\c\s\2\v\4\6\e\4\x\j\a\i\w\e\r\1\9\f\y\i\g\c\l\5\z\k\1\o\p\m\p\r\n\h\m\m\5\5\f\r\k\8\o\r\w\x\2\d\2\q\k\q\o\r\9\6\t\g\f\d\g\j\g\2\d\7\d\w\t\c\1\4\b\9\i\r\a\i\8\n\3\4\p\5\a\a\t\5\3\k\o\f\4\t\r\l\y\v\1\n\0\1\x\f\w\m\j\k\l\t\r\s\3\j\5\c\6\0\i\1\5\o\z\h\3\a\m\1\p\o\d\9\q\t\k\3\6\q ]] 00:07:38.166 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.166 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:38.166 [2024-10-29 10:56:43.543407] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:38.166 [2024-10-29 10:56:43.543543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73610 ] 00:07:38.426 [2024-10-29 10:56:43.689336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.426 [2024-10-29 10:56:43.708025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.426 [2024-10-29 10:56:43.737636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.426  [2024-10-29T10:56:43.923Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.426 00:07:38.426 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rijz8xq0ilm4k6cgosgzyb4u1xbvr1ucdw7mwllbfgdgdcva5uvnjv0qo43hxdjoj8ln7eszve7xdah4kxk19p3bad3wizeyfm75l7pedigw2hvvopezmdbavo0k3mmcwzyordjn5xvjypxjyg3r3n6v6utfzuxfub8aagwkhxi9ixiba1cj4bwg7o7lr3vnhigeb7g4ir4uvrvoq8ccpcbkf4gt8sb8x7rroespzzhwvxgs492d4rytkfhm7kh62xsztncoud3tknx36uthwcr9c72uxhv2p3aeby24rfgvt07pl2x5z0ecb7vzm7kaowrp56b4t2l0rxu53vjc3b0wjzdxlfavpblq6dablk5myjs88crelscs2v46e4xjaiwer19fyigcl5zk1opmprnhmm55frk8orwx2d2qkqor96tgfdgjg2d7dwtc14b9irai8n34p5aat53kof4trlyv1n01xfwmjkltrs3j5c60i15ozh3am1pod9qtk36q == \r\i\j\z\8\x\q\0\i\l\m\4\k\6\c\g\o\s\g\z\y\b\4\u\1\x\b\v\r\1\u\c\d\w\7\m\w\l\l\b\f\g\d\g\d\c\v\a\5\u\v\n\j\v\0\q\o\4\3\h\x\d\j\o\j\8\l\n\7\e\s\z\v\e\7\x\d\a\h\4\k\x\k\1\9\p\3\b\a\d\3\w\i\z\e\y\f\m\7\5\l\7\p\e\d\i\g\w\2\h\v\v\o\p\e\z\m\d\b\a\v\o\0\k\3\m\m\c\w\z\y\o\r\d\j\n\5\x\v\j\y\p\x\j\y\g\3\r\3\n\6\v\6\u\t\f\z\u\x\f\u\b\8\a\a\g\w\k\h\x\i\9\i\x\i\b\a\1\c\j\4\b\w\g\7\o\7\l\r\3\v\n\h\i\g\e\b\7\g\4\i\r\4\u\v\r\v\o\q\8\c\c\p\c\b\k\f\4\g\t\8\s\b\8\x\7\r\r\o\e\s\p\z\z\h\w\v\x\g\s\4\9\2\d\4\r\y\t\k\f\h\m\7\k\h\6\2\x\s\z\t\n\c\o\u\d\3\t\k\n\x\3\6\u\t\h\w\c\r\9\c\7\2\u\x\h\v\2\p\3\a\e\b\y\2\4\r\f\g\v\t\0\7\p\l\2\x\5\z\0\e\c\b\7\v\z\m\7\k\a\o\w\r\p\5\6\b\4\t\2\l\0\r\x\u\5\3\v\j\c\3\b\0\w\j\z\d\x\l\f\a\v\p\b\l\q\6\d\a\b\l\k\5\m\y\j\s\8\8\c\r\e\l\s\c\s\2\v\4\6\e\4\x\j\a\i\w\e\r\1\9\f\y\i\g\c\l\5\z\k\1\o\p\m\p\r\n\h\m\m\5\5\f\r\k\8\o\r\w\x\2\d\2\q\k\q\o\r\9\6\t\g\f\d\g\j\g\2\d\7\d\w\t\c\1\4\b\9\i\r\a\i\8\n\3\4\p\5\a\a\t\5\3\k\o\f\4\t\r\l\y\v\1\n\0\1\x\f\w\m\j\k\l\t\r\s\3\j\5\c\6\0\i\1\5\o\z\h\3\a\m\1\p\o\d\9\q\t\k\3\6\q ]] 00:07:38.426 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.426 10:56:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:38.686 [2024-10-29 10:56:43.940344] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:38.686 [2024-10-29 10:56:43.940461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73612 ] 00:07:38.686 [2024-10-29 10:56:44.085660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.686 [2024-10-29 10:56:44.105677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.686 [2024-10-29 10:56:44.135239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.686  [2024-10-29T10:56:44.443Z] Copying: 512/512 [B] (average 500 kBps) 00:07:38.946 00:07:38.946 10:56:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rijz8xq0ilm4k6cgosgzyb4u1xbvr1ucdw7mwllbfgdgdcva5uvnjv0qo43hxdjoj8ln7eszve7xdah4kxk19p3bad3wizeyfm75l7pedigw2hvvopezmdbavo0k3mmcwzyordjn5xvjypxjyg3r3n6v6utfzuxfub8aagwkhxi9ixiba1cj4bwg7o7lr3vnhigeb7g4ir4uvrvoq8ccpcbkf4gt8sb8x7rroespzzhwvxgs492d4rytkfhm7kh62xsztncoud3tknx36uthwcr9c72uxhv2p3aeby24rfgvt07pl2x5z0ecb7vzm7kaowrp56b4t2l0rxu53vjc3b0wjzdxlfavpblq6dablk5myjs88crelscs2v46e4xjaiwer19fyigcl5zk1opmprnhmm55frk8orwx2d2qkqor96tgfdgjg2d7dwtc14b9irai8n34p5aat53kof4trlyv1n01xfwmjkltrs3j5c60i15ozh3am1pod9qtk36q == \r\i\j\z\8\x\q\0\i\l\m\4\k\6\c\g\o\s\g\z\y\b\4\u\1\x\b\v\r\1\u\c\d\w\7\m\w\l\l\b\f\g\d\g\d\c\v\a\5\u\v\n\j\v\0\q\o\4\3\h\x\d\j\o\j\8\l\n\7\e\s\z\v\e\7\x\d\a\h\4\k\x\k\1\9\p\3\b\a\d\3\w\i\z\e\y\f\m\7\5\l\7\p\e\d\i\g\w\2\h\v\v\o\p\e\z\m\d\b\a\v\o\0\k\3\m\m\c\w\z\y\o\r\d\j\n\5\x\v\j\y\p\x\j\y\g\3\r\3\n\6\v\6\u\t\f\z\u\x\f\u\b\8\a\a\g\w\k\h\x\i\9\i\x\i\b\a\1\c\j\4\b\w\g\7\o\7\l\r\3\v\n\h\i\g\e\b\7\g\4\i\r\4\u\v\r\v\o\q\8\c\c\p\c\b\k\f\4\g\t\8\s\b\8\x\7\r\r\o\e\s\p\z\z\h\w\v\x\g\s\4\9\2\d\4\r\y\t\k\f\h\m\7\k\h\6\2\x\s\z\t\n\c\o\u\d\3\t\k\n\x\3\6\u\t\h\w\c\r\9\c\7\2\u\x\h\v\2\p\3\a\e\b\y\2\4\r\f\g\v\t\0\7\p\l\2\x\5\z\0\e\c\b\7\v\z\m\7\k\a\o\w\r\p\5\6\b\4\t\2\l\0\r\x\u\5\3\v\j\c\3\b\0\w\j\z\d\x\l\f\a\v\p\b\l\q\6\d\a\b\l\k\5\m\y\j\s\8\8\c\r\e\l\s\c\s\2\v\4\6\e\4\x\j\a\i\w\e\r\1\9\f\y\i\g\c\l\5\z\k\1\o\p\m\p\r\n\h\m\m\5\5\f\r\k\8\o\r\w\x\2\d\2\q\k\q\o\r\9\6\t\g\f\d\g\j\g\2\d\7\d\w\t\c\1\4\b\9\i\r\a\i\8\n\3\4\p\5\a\a\t\5\3\k\o\f\4\t\r\l\y\v\1\n\0\1\x\f\w\m\j\k\l\t\r\s\3\j\5\c\6\0\i\1\5\o\z\h\3\a\m\1\p\o\d\9\q\t\k\3\6\q ]] 00:07:38.946 10:56:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:38.946 10:56:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:38.946 [2024-10-29 10:56:44.342506] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:38.946 [2024-10-29 10:56:44.342605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73620 ] 00:07:39.206 [2024-10-29 10:56:44.488459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.206 [2024-10-29 10:56:44.506569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.206 [2024-10-29 10:56:44.533987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.206  [2024-10-29T10:56:44.703Z] Copying: 512/512 [B] (average 500 kBps) 00:07:39.206 00:07:39.206 10:56:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ rijz8xq0ilm4k6cgosgzyb4u1xbvr1ucdw7mwllbfgdgdcva5uvnjv0qo43hxdjoj8ln7eszve7xdah4kxk19p3bad3wizeyfm75l7pedigw2hvvopezmdbavo0k3mmcwzyordjn5xvjypxjyg3r3n6v6utfzuxfub8aagwkhxi9ixiba1cj4bwg7o7lr3vnhigeb7g4ir4uvrvoq8ccpcbkf4gt8sb8x7rroespzzhwvxgs492d4rytkfhm7kh62xsztncoud3tknx36uthwcr9c72uxhv2p3aeby24rfgvt07pl2x5z0ecb7vzm7kaowrp56b4t2l0rxu53vjc3b0wjzdxlfavpblq6dablk5myjs88crelscs2v46e4xjaiwer19fyigcl5zk1opmprnhmm55frk8orwx2d2qkqor96tgfdgjg2d7dwtc14b9irai8n34p5aat53kof4trlyv1n01xfwmjkltrs3j5c60i15ozh3am1pod9qtk36q == \r\i\j\z\8\x\q\0\i\l\m\4\k\6\c\g\o\s\g\z\y\b\4\u\1\x\b\v\r\1\u\c\d\w\7\m\w\l\l\b\f\g\d\g\d\c\v\a\5\u\v\n\j\v\0\q\o\4\3\h\x\d\j\o\j\8\l\n\7\e\s\z\v\e\7\x\d\a\h\4\k\x\k\1\9\p\3\b\a\d\3\w\i\z\e\y\f\m\7\5\l\7\p\e\d\i\g\w\2\h\v\v\o\p\e\z\m\d\b\a\v\o\0\k\3\m\m\c\w\z\y\o\r\d\j\n\5\x\v\j\y\p\x\j\y\g\3\r\3\n\6\v\6\u\t\f\z\u\x\f\u\b\8\a\a\g\w\k\h\x\i\9\i\x\i\b\a\1\c\j\4\b\w\g\7\o\7\l\r\3\v\n\h\i\g\e\b\7\g\4\i\r\4\u\v\r\v\o\q\8\c\c\p\c\b\k\f\4\g\t\8\s\b\8\x\7\r\r\o\e\s\p\z\z\h\w\v\x\g\s\4\9\2\d\4\r\y\t\k\f\h\m\7\k\h\6\2\x\s\z\t\n\c\o\u\d\3\t\k\n\x\3\6\u\t\h\w\c\r\9\c\7\2\u\x\h\v\2\p\3\a\e\b\y\2\4\r\f\g\v\t\0\7\p\l\2\x\5\z\0\e\c\b\7\v\z\m\7\k\a\o\w\r\p\5\6\b\4\t\2\l\0\r\x\u\5\3\v\j\c\3\b\0\w\j\z\d\x\l\f\a\v\p\b\l\q\6\d\a\b\l\k\5\m\y\j\s\8\8\c\r\e\l\s\c\s\2\v\4\6\e\4\x\j\a\i\w\e\r\1\9\f\y\i\g\c\l\5\z\k\1\o\p\m\p\r\n\h\m\m\5\5\f\r\k\8\o\r\w\x\2\d\2\q\k\q\o\r\9\6\t\g\f\d\g\j\g\2\d\7\d\w\t\c\1\4\b\9\i\r\a\i\8\n\3\4\p\5\a\a\t\5\3\k\o\f\4\t\r\l\y\v\1\n\0\1\x\f\w\m\j\k\l\t\r\s\3\j\5\c\6\0\i\1\5\o\z\h\3\a\m\1\p\o\d\9\q\t\k\3\6\q ]] 00:07:39.206 00:07:39.206 real 0m3.197s 00:07:39.206 user 0m1.470s 00:07:39.206 sys 0m0.745s 00:07:39.206 10:56:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.206 10:56:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:39.206 ************************************ 00:07:39.206 END TEST dd_flags_misc_forced_aio 00:07:39.206 ************************************ 00:07:39.466 10:56:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:39.466 10:56:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:39.466 10:56:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:39.466 00:07:39.466 real 0m15.306s 00:07:39.466 user 0m6.277s 00:07:39.466 sys 0m4.339s 00:07:39.466 10:56:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.466 ************************************ 00:07:39.466 END TEST spdk_dd_posix 
00:07:39.466 ************************************ 00:07:39.466 10:56:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:39.466 10:56:44 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:39.466 10:56:44 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:39.466 10:56:44 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.466 10:56:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:39.466 ************************************ 00:07:39.466 START TEST spdk_dd_malloc 00:07:39.466 ************************************ 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:39.466 * Looking for test storage... 00:07:39.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.466 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:39.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.466 --rc genhtml_branch_coverage=1 00:07:39.466 --rc genhtml_function_coverage=1 00:07:39.466 --rc genhtml_legend=1 00:07:39.466 --rc geninfo_all_blocks=1 00:07:39.467 --rc geninfo_unexecuted_blocks=1 00:07:39.467 00:07:39.467 ' 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.467 --rc genhtml_branch_coverage=1 00:07:39.467 --rc genhtml_function_coverage=1 00:07:39.467 --rc genhtml_legend=1 00:07:39.467 --rc geninfo_all_blocks=1 00:07:39.467 --rc geninfo_unexecuted_blocks=1 00:07:39.467 00:07:39.467 ' 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.467 --rc genhtml_branch_coverage=1 00:07:39.467 --rc genhtml_function_coverage=1 00:07:39.467 --rc genhtml_legend=1 00:07:39.467 --rc geninfo_all_blocks=1 00:07:39.467 --rc geninfo_unexecuted_blocks=1 00:07:39.467 00:07:39.467 ' 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:39.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.467 --rc genhtml_branch_coverage=1 00:07:39.467 --rc genhtml_function_coverage=1 00:07:39.467 --rc genhtml_legend=1 00:07:39.467 --rc geninfo_all_blocks=1 00:07:39.467 --rc geninfo_unexecuted_blocks=1 00:07:39.467 00:07:39.467 ' 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.467 10:56:44 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.467 10:56:44 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:39.726 ************************************ 00:07:39.726 START TEST dd_malloc_copy 00:07:39.726 ************************************ 00:07:39.726 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:07:39.726 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:39.726 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:39.726 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:39.727 10:56:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:39.727 [2024-10-29 10:56:45.028461] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:39.727 [2024-10-29 10:56:45.028554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73696 ] 00:07:39.727 { 00:07:39.727 "subsystems": [ 00:07:39.727 { 00:07:39.727 "subsystem": "bdev", 00:07:39.727 "config": [ 00:07:39.727 { 00:07:39.727 "params": { 00:07:39.727 "block_size": 512, 00:07:39.727 "num_blocks": 1048576, 00:07:39.727 "name": "malloc0" 00:07:39.727 }, 00:07:39.727 "method": "bdev_malloc_create" 00:07:39.727 }, 00:07:39.727 { 00:07:39.727 "params": { 00:07:39.727 "block_size": 512, 00:07:39.727 "num_blocks": 1048576, 00:07:39.727 "name": "malloc1" 00:07:39.727 }, 00:07:39.727 "method": "bdev_malloc_create" 00:07:39.727 }, 00:07:39.727 { 00:07:39.727 "method": "bdev_wait_for_examine" 00:07:39.727 } 00:07:39.727 ] 00:07:39.727 } 00:07:39.727 ] 00:07:39.727 } 00:07:39.727 [2024-10-29 10:56:45.177024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.727 [2024-10-29 10:56:45.197283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.986 [2024-10-29 10:56:45.227689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.365  [2024-10-29T10:56:47.565Z] Copying: 233/512 [MB] (233 MBps) [2024-10-29T10:56:47.824Z] Copying: 466/512 [MB] (232 MBps) [2024-10-29T10:56:48.083Z] Copying: 512/512 [MB] (average 232 MBps) 00:07:42.586 00:07:42.586 10:56:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:42.586 10:56:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:42.586 10:56:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:42.586 10:56:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.586 [2024-10-29 10:56:48.002809] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:42.586 [2024-10-29 10:56:48.002916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73738 ] 00:07:42.586 { 00:07:42.586 "subsystems": [ 00:07:42.586 { 00:07:42.586 "subsystem": "bdev", 00:07:42.586 "config": [ 00:07:42.586 { 00:07:42.586 "params": { 00:07:42.586 "block_size": 512, 00:07:42.586 "num_blocks": 1048576, 00:07:42.586 "name": "malloc0" 00:07:42.586 }, 00:07:42.586 "method": "bdev_malloc_create" 00:07:42.586 }, 00:07:42.586 { 00:07:42.586 "params": { 00:07:42.586 "block_size": 512, 00:07:42.586 "num_blocks": 1048576, 00:07:42.586 "name": "malloc1" 00:07:42.586 }, 00:07:42.586 "method": "bdev_malloc_create" 00:07:42.586 }, 00:07:42.586 { 00:07:42.587 "method": "bdev_wait_for_examine" 00:07:42.587 } 00:07:42.587 ] 00:07:42.587 } 00:07:42.587 ] 00:07:42.587 } 00:07:42.846 [2024-10-29 10:56:48.146995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.846 [2024-10-29 10:56:48.165451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.846 [2024-10-29 10:56:48.193938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.224  [2024-10-29T10:56:50.656Z] Copying: 235/512 [MB] (235 MBps) [2024-10-29T10:56:50.656Z] Copying: 460/512 [MB] (224 MBps) [2024-10-29T10:56:51.224Z] Copying: 512/512 [MB] (average 230 MBps) 00:07:45.727 00:07:45.727 00:07:45.727 real 0m5.962s 00:07:45.727 user 0m5.316s 00:07:45.727 sys 0m0.499s 00:07:45.727 10:56:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.727 10:56:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.727 ************************************ 00:07:45.727 END TEST dd_malloc_copy 00:07:45.727 ************************************ 00:07:45.727 00:07:45.727 real 0m6.202s 00:07:45.727 user 0m5.439s 00:07:45.727 sys 0m0.620s 00:07:45.727 10:56:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.727 10:56:50 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:45.727 ************************************ 00:07:45.727 END TEST spdk_dd_malloc 00:07:45.727 ************************************ 00:07:45.727 10:56:51 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:45.727 10:56:51 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:45.727 10:56:51 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.727 10:56:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:45.727 ************************************ 00:07:45.727 START TEST spdk_dd_bdev_to_bdev 00:07:45.727 ************************************ 00:07:45.727 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:45.727 * Looking for test storage... 
00:07:45.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:45.727 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:45.727 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:07:45.727 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:45.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.987 --rc genhtml_branch_coverage=1 00:07:45.987 --rc genhtml_function_coverage=1 00:07:45.987 --rc genhtml_legend=1 00:07:45.987 --rc geninfo_all_blocks=1 00:07:45.987 --rc geninfo_unexecuted_blocks=1 00:07:45.987 00:07:45.987 ' 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:45.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.987 --rc genhtml_branch_coverage=1 00:07:45.987 --rc genhtml_function_coverage=1 00:07:45.987 --rc genhtml_legend=1 00:07:45.987 --rc geninfo_all_blocks=1 00:07:45.987 --rc geninfo_unexecuted_blocks=1 00:07:45.987 00:07:45.987 ' 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:45.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.987 --rc genhtml_branch_coverage=1 00:07:45.987 --rc genhtml_function_coverage=1 00:07:45.987 --rc genhtml_legend=1 00:07:45.987 --rc geninfo_all_blocks=1 00:07:45.987 --rc geninfo_unexecuted_blocks=1 00:07:45.987 00:07:45.987 ' 00:07:45.987 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:45.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.987 --rc genhtml_branch_coverage=1 00:07:45.987 --rc genhtml_function_coverage=1 00:07:45.987 --rc genhtml_legend=1 00:07:45.987 --rc geninfo_all_blocks=1 00:07:45.987 --rc geninfo_unexecuted_blocks=1 00:07:45.987 00:07:45.987 ' 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.988 10:56:51 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.988 ************************************ 00:07:45.988 START TEST dd_inflate_file 00:07:45.988 ************************************ 00:07:45.988 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:45.988 [2024-10-29 10:56:51.321025] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:45.988 [2024-10-29 10:56:51.321138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73845 ] 00:07:45.988 [2024-10-29 10:56:51.475616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.248 [2024-10-29 10:56:51.501615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.248 [2024-10-29 10:56:51.538441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.248  [2024-10-29T10:56:51.745Z] Copying: 64/64 [MB] (average 1600 MBps) 00:07:46.248 00:07:46.248 00:07:46.248 real 0m0.477s 00:07:46.248 user 0m0.255s 00:07:46.248 sys 0m0.247s 00:07:46.248 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.248 ************************************ 00:07:46.248 END TEST dd_inflate_file 00:07:46.248 ************************************ 00:07:46.248 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:46.507 ************************************ 00:07:46.507 START TEST dd_copy_to_out_bdev 00:07:46.507 ************************************ 00:07:46.507 10:56:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:46.507 { 00:07:46.507 "subsystems": [ 00:07:46.507 { 00:07:46.507 "subsystem": "bdev", 00:07:46.507 "config": [ 00:07:46.507 { 00:07:46.507 "params": { 00:07:46.507 "trtype": "pcie", 00:07:46.507 "traddr": "0000:00:10.0", 00:07:46.507 "name": "Nvme0" 00:07:46.507 }, 00:07:46.507 "method": "bdev_nvme_attach_controller" 00:07:46.507 }, 00:07:46.507 { 00:07:46.507 "params": { 00:07:46.507 "trtype": "pcie", 00:07:46.507 "traddr": "0000:00:11.0", 00:07:46.507 "name": "Nvme1" 00:07:46.507 }, 00:07:46.507 "method": "bdev_nvme_attach_controller" 00:07:46.507 }, 00:07:46.507 { 00:07:46.507 "method": "bdev_wait_for_examine" 00:07:46.507 } 00:07:46.507 ] 00:07:46.507 } 00:07:46.507 ] 00:07:46.507 } 00:07:46.507 [2024-10-29 10:56:51.850285] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:46.507 [2024-10-29 10:56:51.850395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73886 ] 00:07:46.507 [2024-10-29 10:56:52.003895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.767 [2024-10-29 10:56:52.028542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.767 [2024-10-29 10:56:52.064238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.149  [2024-10-29T10:56:53.646Z] Copying: 49/64 [MB] (49 MBps) [2024-10-29T10:56:53.907Z] Copying: 64/64 [MB] (average 48 MBps) 00:07:48.410 00:07:48.410 00:07:48.410 real 0m1.889s 00:07:48.410 user 0m1.700s 00:07:48.410 sys 0m1.553s 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:48.410 ************************************ 00:07:48.410 END TEST dd_copy_to_out_bdev 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.410 ************************************ 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:48.410 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.410 ************************************ 00:07:48.410 START TEST dd_offset_magic 00:07:48.411 ************************************ 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:48.411 10:56:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:48.411 { 00:07:48.411 "subsystems": [ 00:07:48.411 { 00:07:48.411 "subsystem": "bdev", 00:07:48.411 "config": [ 00:07:48.411 { 00:07:48.411 "params": { 00:07:48.411 "trtype": "pcie", 00:07:48.411 "traddr": "0000:00:10.0", 00:07:48.411 "name": "Nvme0" 00:07:48.411 }, 00:07:48.411 "method": "bdev_nvme_attach_controller" 00:07:48.411 }, 00:07:48.411 { 00:07:48.411 "params": { 00:07:48.411 "trtype": "pcie", 00:07:48.411 "traddr": "0000:00:11.0", 00:07:48.411 "name": "Nvme1" 
00:07:48.411 }, 00:07:48.411 "method": "bdev_nvme_attach_controller" 00:07:48.411 }, 00:07:48.411 { 00:07:48.411 "method": "bdev_wait_for_examine" 00:07:48.411 } 00:07:48.411 ] 00:07:48.411 } 00:07:48.411 ] 00:07:48.411 } 00:07:48.411 [2024-10-29 10:56:53.802415] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:48.411 [2024-10-29 10:56:53.802538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73924 ] 00:07:48.668 [2024-10-29 10:56:53.947896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.668 [2024-10-29 10:56:53.967899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.668 [2024-10-29 10:56:53.996885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.927  [2024-10-29T10:56:54.424Z] Copying: 65/65 [MB] (average 970 MBps) 00:07:48.927 00:07:48.927 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:48.927 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:48.927 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:48.927 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:49.185 [2024-10-29 10:56:54.437927] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:49.185 [2024-10-29 10:56:54.438490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73944 ] 00:07:49.185 { 00:07:49.185 "subsystems": [ 00:07:49.185 { 00:07:49.185 "subsystem": "bdev", 00:07:49.185 "config": [ 00:07:49.185 { 00:07:49.185 "params": { 00:07:49.185 "trtype": "pcie", 00:07:49.185 "traddr": "0000:00:10.0", 00:07:49.185 "name": "Nvme0" 00:07:49.185 }, 00:07:49.185 "method": "bdev_nvme_attach_controller" 00:07:49.185 }, 00:07:49.185 { 00:07:49.185 "params": { 00:07:49.185 "trtype": "pcie", 00:07:49.185 "traddr": "0000:00:11.0", 00:07:49.185 "name": "Nvme1" 00:07:49.185 }, 00:07:49.185 "method": "bdev_nvme_attach_controller" 00:07:49.185 }, 00:07:49.185 { 00:07:49.185 "method": "bdev_wait_for_examine" 00:07:49.185 } 00:07:49.185 ] 00:07:49.185 } 00:07:49.185 ] 00:07:49.185 } 00:07:49.185 [2024-10-29 10:56:54.577799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.185 [2024-10-29 10:56:54.596825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.185 [2024-10-29 10:56:54.624647] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.443  [2024-10-29T10:56:54.940Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:49.443 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:49.443 10:56:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:49.702 { 00:07:49.702 "subsystems": [ 00:07:49.702 { 00:07:49.702 "subsystem": "bdev", 00:07:49.702 "config": [ 00:07:49.702 { 00:07:49.702 "params": { 00:07:49.702 "trtype": "pcie", 00:07:49.702 "traddr": "0000:00:10.0", 00:07:49.702 "name": "Nvme0" 00:07:49.702 }, 00:07:49.702 "method": "bdev_nvme_attach_controller" 00:07:49.702 }, 00:07:49.702 { 00:07:49.702 "params": { 00:07:49.702 "trtype": "pcie", 00:07:49.702 "traddr": "0000:00:11.0", 00:07:49.702 "name": "Nvme1" 00:07:49.702 }, 00:07:49.702 "method": "bdev_nvme_attach_controller" 00:07:49.702 }, 00:07:49.702 { 00:07:49.702 "method": "bdev_wait_for_examine" 00:07:49.702 } 00:07:49.702 ] 00:07:49.702 } 00:07:49.702 ] 00:07:49.702 } 00:07:49.702 [2024-10-29 10:56:54.965077] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:49.702 [2024-10-29 10:56:54.965210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73960 ] 00:07:49.702 [2024-10-29 10:56:55.112813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.702 [2024-10-29 10:56:55.132862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.702 [2024-10-29 10:56:55.162102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.961  [2024-10-29T10:56:55.717Z] Copying: 65/65 [MB] (average 1031 MBps) 00:07:50.220 00:07:50.220 10:56:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:50.220 10:56:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:50.220 10:56:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:50.220 10:56:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:50.220 { 00:07:50.220 "subsystems": [ 00:07:50.220 { 00:07:50.220 "subsystem": "bdev", 00:07:50.220 "config": [ 00:07:50.220 { 00:07:50.220 "params": { 00:07:50.220 "trtype": "pcie", 00:07:50.220 "traddr": "0000:00:10.0", 00:07:50.220 "name": "Nvme0" 00:07:50.220 }, 00:07:50.220 "method": "bdev_nvme_attach_controller" 00:07:50.220 }, 00:07:50.220 { 00:07:50.220 "params": { 00:07:50.220 "trtype": "pcie", 00:07:50.220 "traddr": "0000:00:11.0", 00:07:50.220 "name": "Nvme1" 00:07:50.220 }, 00:07:50.220 "method": "bdev_nvme_attach_controller" 00:07:50.220 }, 00:07:50.220 { 00:07:50.220 "method": "bdev_wait_for_examine" 00:07:50.220 } 00:07:50.220 ] 00:07:50.220 } 00:07:50.220 ] 00:07:50.220 } 00:07:50.220 [2024-10-29 10:56:55.607975] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:50.220 [2024-10-29 10:56:55.608123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73975 ] 00:07:50.478 [2024-10-29 10:56:55.754942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.478 [2024-10-29 10:56:55.774958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.478 [2024-10-29 10:56:55.804324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.478  [2024-10-29T10:56:56.234Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:50.737 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:50.737 00:07:50.737 real 0m2.343s 00:07:50.737 user 0m1.687s 00:07:50.737 sys 0m0.626s 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:50.737 ************************************ 00:07:50.737 END TEST dd_offset_magic 00:07:50.737 ************************************ 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:50.737 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:50.737 [2024-10-29 10:56:56.180802] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:50.737 [2024-10-29 10:56:56.180919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:07:50.737 { 00:07:50.737 "subsystems": [ 00:07:50.737 { 00:07:50.737 "subsystem": "bdev", 00:07:50.737 "config": [ 00:07:50.737 { 00:07:50.737 "params": { 00:07:50.737 "trtype": "pcie", 00:07:50.737 "traddr": "0000:00:10.0", 00:07:50.737 "name": "Nvme0" 00:07:50.737 }, 00:07:50.737 "method": "bdev_nvme_attach_controller" 00:07:50.737 }, 00:07:50.737 { 00:07:50.737 "params": { 00:07:50.737 "trtype": "pcie", 00:07:50.738 "traddr": "0000:00:11.0", 00:07:50.738 "name": "Nvme1" 00:07:50.738 }, 00:07:50.738 "method": "bdev_nvme_attach_controller" 00:07:50.738 }, 00:07:50.738 { 00:07:50.738 "method": "bdev_wait_for_examine" 00:07:50.738 } 00:07:50.738 ] 00:07:50.738 } 00:07:50.738 ] 00:07:50.738 } 00:07:50.996 [2024-10-29 10:56:56.320278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.996 [2024-10-29 10:56:56.340823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.996 [2024-10-29 10:56:56.369989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.254  [2024-10-29T10:56:56.751Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:07:51.254 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:51.254 10:56:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:51.254 { 00:07:51.254 "subsystems": [ 00:07:51.254 { 00:07:51.254 "subsystem": "bdev", 00:07:51.254 "config": [ 00:07:51.254 { 00:07:51.254 "params": { 00:07:51.254 "trtype": "pcie", 00:07:51.254 "traddr": "0000:00:10.0", 00:07:51.254 "name": "Nvme0" 00:07:51.254 }, 00:07:51.254 "method": "bdev_nvme_attach_controller" 00:07:51.254 }, 00:07:51.254 { 00:07:51.254 "params": { 00:07:51.254 "trtype": "pcie", 00:07:51.254 "traddr": "0000:00:11.0", 00:07:51.254 "name": "Nvme1" 00:07:51.254 }, 00:07:51.254 "method": "bdev_nvme_attach_controller" 00:07:51.254 }, 00:07:51.254 { 00:07:51.254 "method": "bdev_wait_for_examine" 00:07:51.254 } 00:07:51.254 ] 00:07:51.254 } 00:07:51.254 ] 00:07:51.254 } 00:07:51.254 [2024-10-29 10:56:56.716270] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:51.254 [2024-10-29 10:56:56.716585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74022 ] 00:07:51.513 [2024-10-29 10:56:56.863120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.513 [2024-10-29 10:56:56.883466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.513 [2024-10-29 10:56:56.914230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.772  [2024-10-29T10:56:57.269Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:51.772 00:07:51.772 10:56:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:51.772 ************************************ 00:07:51.772 END TEST spdk_dd_bdev_to_bdev 00:07:51.772 ************************************ 00:07:51.772 00:07:51.772 real 0m6.175s 00:07:51.772 user 0m4.597s 00:07:51.772 sys 0m2.958s 00:07:51.772 10:56:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.772 10:56:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:51.772 10:56:57 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:51.772 10:56:57 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:51.772 10:56:57 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.772 10:56:57 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.772 10:56:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:51.772 ************************************ 00:07:51.772 START TEST spdk_dd_uring 00:07:51.772 ************************************ 00:07:51.772 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:52.030 * Looking for test storage... 
00:07:52.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:52.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.031 --rc genhtml_branch_coverage=1 00:07:52.031 --rc genhtml_function_coverage=1 00:07:52.031 --rc genhtml_legend=1 00:07:52.031 --rc geninfo_all_blocks=1 00:07:52.031 --rc geninfo_unexecuted_blocks=1 00:07:52.031 00:07:52.031 ' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:52.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.031 --rc genhtml_branch_coverage=1 00:07:52.031 --rc genhtml_function_coverage=1 00:07:52.031 --rc genhtml_legend=1 00:07:52.031 --rc geninfo_all_blocks=1 00:07:52.031 --rc geninfo_unexecuted_blocks=1 00:07:52.031 00:07:52.031 ' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:52.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.031 --rc genhtml_branch_coverage=1 00:07:52.031 --rc genhtml_function_coverage=1 00:07:52.031 --rc genhtml_legend=1 00:07:52.031 --rc geninfo_all_blocks=1 00:07:52.031 --rc geninfo_unexecuted_blocks=1 00:07:52.031 00:07:52.031 ' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:52.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.031 --rc genhtml_branch_coverage=1 00:07:52.031 --rc genhtml_function_coverage=1 00:07:52.031 --rc genhtml_legend=1 00:07:52.031 --rc geninfo_all_blocks=1 00:07:52.031 --rc geninfo_unexecuted_blocks=1 00:07:52.031 00:07:52.031 ' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:52.031 ************************************ 00:07:52.031 START TEST dd_uring_copy 00:07:52.031 ************************************ 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:52.031 
10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:52.031 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=nyv0hnk3dk3p568g44ps1x2b9ra4x9jc39n7sruylfvc2xokq89vq5kciv5gue6irqrsrz7303ti66n1gl8jwd1r7imhpmwg3brbjii3wwxf8pccg4fya9srpvisppchufnl5x4v30pvqrplrm6qob6jn2l5y1tbgjsiftkszj9jmiq9z1wejca7zo7ttd9mgsjavsug0rw01ddqe24kf74wpy55a81hi49r3ps7l66qxwci02d35fqp8txbd1yoxyurowrpfyuzfinevdyaa12k427m3fxs724ujlmnzyfi4nt9hve0mu9hlrujtjq0z4ihif3yx9wrwfsvc270oy24bbw0pfumaytvdgrm83m5acbe5gboake74j8sgc9ns6tapsn3s3mhzijbr3z3lya0e4dch66fxcw6nskapp5lfx0wxgbum7dm62bryh9bldueryat27h2yje9zv2g7ol4io4djxdx7onloqgn0mh19e59xomybfhps6sw4jj047ypge0z5sjnxoiioef0vz8t2bcpu1ljv3nrx28h12mdvd593iw8piyodjq6mm5s8pxim9cvnchbngeyr9ad8ci58b3t2x4327i4c7l8jl8h79mravt5sb0jtl8evccbt601c99niykat28jj8wk049xlmuz42odzhh82ys47u9onio5fcn33wtj50yvg6m2esnnzk1ath2x46gyvqyxoqwyti6tdkfsw9n5sula77v42k9lhowvaxx8ei0eotv1npj928u6cjokpb25doc5j7ht1boi444cmifb5yf5v4goq870yy9l4zi2dctte3kinw44i0w9eu7f25oxpyleva84t3ukx5elydnw9qyu09yny0jd1fch59nm6qms8ie2cb5s6cy615657zusvxvj6qfijc17o4gw7ck5avqq18r9ej1jrb1zb6eesv44bxwhkoxtrixnoad0znji4m8md4bfs1drqywsbsmutg1j79l1cpw6rpga1sjg2sgxhlv3 00:07:52.032 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
nyv0hnk3dk3p568g44ps1x2b9ra4x9jc39n7sruylfvc2xokq89vq5kciv5gue6irqrsrz7303ti66n1gl8jwd1r7imhpmwg3brbjii3wwxf8pccg4fya9srpvisppchufnl5x4v30pvqrplrm6qob6jn2l5y1tbgjsiftkszj9jmiq9z1wejca7zo7ttd9mgsjavsug0rw01ddqe24kf74wpy55a81hi49r3ps7l66qxwci02d35fqp8txbd1yoxyurowrpfyuzfinevdyaa12k427m3fxs724ujlmnzyfi4nt9hve0mu9hlrujtjq0z4ihif3yx9wrwfsvc270oy24bbw0pfumaytvdgrm83m5acbe5gboake74j8sgc9ns6tapsn3s3mhzijbr3z3lya0e4dch66fxcw6nskapp5lfx0wxgbum7dm62bryh9bldueryat27h2yje9zv2g7ol4io4djxdx7onloqgn0mh19e59xomybfhps6sw4jj047ypge0z5sjnxoiioef0vz8t2bcpu1ljv3nrx28h12mdvd593iw8piyodjq6mm5s8pxim9cvnchbngeyr9ad8ci58b3t2x4327i4c7l8jl8h79mravt5sb0jtl8evccbt601c99niykat28jj8wk049xlmuz42odzhh82ys47u9onio5fcn33wtj50yvg6m2esnnzk1ath2x46gyvqyxoqwyti6tdkfsw9n5sula77v42k9lhowvaxx8ei0eotv1npj928u6cjokpb25doc5j7ht1boi444cmifb5yf5v4goq870yy9l4zi2dctte3kinw44i0w9eu7f25oxpyleva84t3ukx5elydnw9qyu09yny0jd1fch59nm6qms8ie2cb5s6cy615657zusvxvj6qfijc17o4gw7ck5avqq18r9ej1jrb1zb6eesv44bxwhkoxtrixnoad0znji4m8md4bfs1drqywsbsmutg1j79l1cpw6rpga1sjg2sgxhlv3 00:07:52.032 10:56:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:52.290 [2024-10-29 10:56:57.557220] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:07:52.290 [2024-10-29 10:56:57.557619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74100 ] 00:07:52.290 [2024-10-29 10:56:57.703127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.290 [2024-10-29 10:56:57.723294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.290 [2024-10-29 10:56:57.751684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.857  [2024-10-29T10:56:58.613Z] Copying: 511/511 [MB] (average 1651 MBps) 00:07:53.116 00:07:53.116 10:56:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:53.116 10:56:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:53.116 10:56:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:53.116 10:56:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:53.116 [2024-10-29 10:56:58.457182] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:53.116 [2024-10-29 10:56:58.457263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74110 ] 00:07:53.116 { 00:07:53.116 "subsystems": [ 00:07:53.116 { 00:07:53.116 "subsystem": "bdev", 00:07:53.116 "config": [ 00:07:53.116 { 00:07:53.116 "params": { 00:07:53.116 "block_size": 512, 00:07:53.116 "num_blocks": 1048576, 00:07:53.116 "name": "malloc0" 00:07:53.116 }, 00:07:53.116 "method": "bdev_malloc_create" 00:07:53.116 }, 00:07:53.116 { 00:07:53.116 "params": { 00:07:53.116 "filename": "/dev/zram1", 00:07:53.116 "name": "uring0" 00:07:53.116 }, 00:07:53.116 "method": "bdev_uring_create" 00:07:53.116 }, 00:07:53.116 { 00:07:53.116 "method": "bdev_wait_for_examine" 00:07:53.116 } 00:07:53.116 ] 00:07:53.116 } 00:07:53.116 ] 00:07:53.116 } 00:07:53.116 [2024-10-29 10:56:58.598999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.374 [2024-10-29 10:56:58.619563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.374 [2024-10-29 10:56:58.649014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.310  [2024-10-29T10:57:01.184Z] Copying: 246/512 [MB] (246 MBps) [2024-10-29T10:57:01.184Z] Copying: 485/512 [MB] (239 MBps) [2024-10-29T10:57:01.184Z] Copying: 512/512 [MB] (average 243 MBps) 00:07:55.687 00:07:55.687 10:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:55.687 10:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:55.687 10:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:55.687 10:57:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:55.687 { 00:07:55.687 "subsystems": [ 00:07:55.687 { 00:07:55.687 "subsystem": "bdev", 00:07:55.687 "config": [ 00:07:55.687 { 00:07:55.687 "params": { 00:07:55.687 "block_size": 512, 00:07:55.687 "num_blocks": 1048576, 00:07:55.687 "name": "malloc0" 00:07:55.687 }, 00:07:55.687 "method": "bdev_malloc_create" 00:07:55.687 }, 00:07:55.687 { 00:07:55.687 "params": { 00:07:55.687 "filename": "/dev/zram1", 00:07:55.687 "name": "uring0" 00:07:55.687 }, 00:07:55.687 "method": "bdev_uring_create" 00:07:55.687 }, 00:07:55.687 { 00:07:55.687 "method": "bdev_wait_for_examine" 00:07:55.687 } 00:07:55.687 ] 00:07:55.687 } 00:07:55.687 ] 00:07:55.687 } 00:07:55.687 [2024-10-29 10:57:01.155860] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:55.687 [2024-10-29 10:57:01.155955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74149 ] 00:07:55.948 [2024-10-29 10:57:01.306187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.948 [2024-10-29 10:57:01.326931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.948 [2024-10-29 10:57:01.356907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.338  [2024-10-29T10:57:03.772Z] Copying: 181/512 [MB] (181 MBps) [2024-10-29T10:57:04.707Z] Copying: 352/512 [MB] (171 MBps) [2024-10-29T10:57:04.707Z] Copying: 512/512 [MB] (average 170 MBps) 00:07:59.210 00:07:59.210 10:57:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:59.470 10:57:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ nyv0hnk3dk3p568g44ps1x2b9ra4x9jc39n7sruylfvc2xokq89vq5kciv5gue6irqrsrz7303ti66n1gl8jwd1r7imhpmwg3brbjii3wwxf8pccg4fya9srpvisppchufnl5x4v30pvqrplrm6qob6jn2l5y1tbgjsiftkszj9jmiq9z1wejca7zo7ttd9mgsjavsug0rw01ddqe24kf74wpy55a81hi49r3ps7l66qxwci02d35fqp8txbd1yoxyurowrpfyuzfinevdyaa12k427m3fxs724ujlmnzyfi4nt9hve0mu9hlrujtjq0z4ihif3yx9wrwfsvc270oy24bbw0pfumaytvdgrm83m5acbe5gboake74j8sgc9ns6tapsn3s3mhzijbr3z3lya0e4dch66fxcw6nskapp5lfx0wxgbum7dm62bryh9bldueryat27h2yje9zv2g7ol4io4djxdx7onloqgn0mh19e59xomybfhps6sw4jj047ypge0z5sjnxoiioef0vz8t2bcpu1ljv3nrx28h12mdvd593iw8piyodjq6mm5s8pxim9cvnchbngeyr9ad8ci58b3t2x4327i4c7l8jl8h79mravt5sb0jtl8evccbt601c99niykat28jj8wk049xlmuz42odzhh82ys47u9onio5fcn33wtj50yvg6m2esnnzk1ath2x46gyvqyxoqwyti6tdkfsw9n5sula77v42k9lhowvaxx8ei0eotv1npj928u6cjokpb25doc5j7ht1boi444cmifb5yf5v4goq870yy9l4zi2dctte3kinw44i0w9eu7f25oxpyleva84t3ukx5elydnw9qyu09yny0jd1fch59nm6qms8ie2cb5s6cy615657zusvxvj6qfijc17o4gw7ck5avqq18r9ej1jrb1zb6eesv44bxwhkoxtrixnoad0znji4m8md4bfs1drqywsbsmutg1j79l1cpw6rpga1sjg2sgxhlv3 == 
\n\y\v\0\h\n\k\3\d\k\3\p\5\6\8\g\4\4\p\s\1\x\2\b\9\r\a\4\x\9\j\c\3\9\n\7\s\r\u\y\l\f\v\c\2\x\o\k\q\8\9\v\q\5\k\c\i\v\5\g\u\e\6\i\r\q\r\s\r\z\7\3\0\3\t\i\6\6\n\1\g\l\8\j\w\d\1\r\7\i\m\h\p\m\w\g\3\b\r\b\j\i\i\3\w\w\x\f\8\p\c\c\g\4\f\y\a\9\s\r\p\v\i\s\p\p\c\h\u\f\n\l\5\x\4\v\3\0\p\v\q\r\p\l\r\m\6\q\o\b\6\j\n\2\l\5\y\1\t\b\g\j\s\i\f\t\k\s\z\j\9\j\m\i\q\9\z\1\w\e\j\c\a\7\z\o\7\t\t\d\9\m\g\s\j\a\v\s\u\g\0\r\w\0\1\d\d\q\e\2\4\k\f\7\4\w\p\y\5\5\a\8\1\h\i\4\9\r\3\p\s\7\l\6\6\q\x\w\c\i\0\2\d\3\5\f\q\p\8\t\x\b\d\1\y\o\x\y\u\r\o\w\r\p\f\y\u\z\f\i\n\e\v\d\y\a\a\1\2\k\4\2\7\m\3\f\x\s\7\2\4\u\j\l\m\n\z\y\f\i\4\n\t\9\h\v\e\0\m\u\9\h\l\r\u\j\t\j\q\0\z\4\i\h\i\f\3\y\x\9\w\r\w\f\s\v\c\2\7\0\o\y\2\4\b\b\w\0\p\f\u\m\a\y\t\v\d\g\r\m\8\3\m\5\a\c\b\e\5\g\b\o\a\k\e\7\4\j\8\s\g\c\9\n\s\6\t\a\p\s\n\3\s\3\m\h\z\i\j\b\r\3\z\3\l\y\a\0\e\4\d\c\h\6\6\f\x\c\w\6\n\s\k\a\p\p\5\l\f\x\0\w\x\g\b\u\m\7\d\m\6\2\b\r\y\h\9\b\l\d\u\e\r\y\a\t\2\7\h\2\y\j\e\9\z\v\2\g\7\o\l\4\i\o\4\d\j\x\d\x\7\o\n\l\o\q\g\n\0\m\h\1\9\e\5\9\x\o\m\y\b\f\h\p\s\6\s\w\4\j\j\0\4\7\y\p\g\e\0\z\5\s\j\n\x\o\i\i\o\e\f\0\v\z\8\t\2\b\c\p\u\1\l\j\v\3\n\r\x\2\8\h\1\2\m\d\v\d\5\9\3\i\w\8\p\i\y\o\d\j\q\6\m\m\5\s\8\p\x\i\m\9\c\v\n\c\h\b\n\g\e\y\r\9\a\d\8\c\i\5\8\b\3\t\2\x\4\3\2\7\i\4\c\7\l\8\j\l\8\h\7\9\m\r\a\v\t\5\s\b\0\j\t\l\8\e\v\c\c\b\t\6\0\1\c\9\9\n\i\y\k\a\t\2\8\j\j\8\w\k\0\4\9\x\l\m\u\z\4\2\o\d\z\h\h\8\2\y\s\4\7\u\9\o\n\i\o\5\f\c\n\3\3\w\t\j\5\0\y\v\g\6\m\2\e\s\n\n\z\k\1\a\t\h\2\x\4\6\g\y\v\q\y\x\o\q\w\y\t\i\6\t\d\k\f\s\w\9\n\5\s\u\l\a\7\7\v\4\2\k\9\l\h\o\w\v\a\x\x\8\e\i\0\e\o\t\v\1\n\p\j\9\2\8\u\6\c\j\o\k\p\b\2\5\d\o\c\5\j\7\h\t\1\b\o\i\4\4\4\c\m\i\f\b\5\y\f\5\v\4\g\o\q\8\7\0\y\y\9\l\4\z\i\2\d\c\t\t\e\3\k\i\n\w\4\4\i\0\w\9\e\u\7\f\2\5\o\x\p\y\l\e\v\a\8\4\t\3\u\k\x\5\e\l\y\d\n\w\9\q\y\u\0\9\y\n\y\0\j\d\1\f\c\h\5\9\n\m\6\q\m\s\8\i\e\2\c\b\5\s\6\c\y\6\1\5\6\5\7\z\u\s\v\x\v\j\6\q\f\i\j\c\1\7\o\4\g\w\7\c\k\5\a\v\q\q\1\8\r\9\e\j\1\j\r\b\1\z\b\6\e\e\s\v\4\4\b\x\w\h\k\o\x\t\r\i\x\n\o\a\d\0\z\n\j\i\4\m\8\m\d\4\b\f\s\1\d\r\q\y\w\s\b\s\m\u\t\g\1\j\7\9\l\1\c\p\w\6\r\p\g\a\1\s\j\g\2\s\g\x\h\l\v\3 ]] 00:07:59.470 10:57:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:59.470 10:57:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ nyv0hnk3dk3p568g44ps1x2b9ra4x9jc39n7sruylfvc2xokq89vq5kciv5gue6irqrsrz7303ti66n1gl8jwd1r7imhpmwg3brbjii3wwxf8pccg4fya9srpvisppchufnl5x4v30pvqrplrm6qob6jn2l5y1tbgjsiftkszj9jmiq9z1wejca7zo7ttd9mgsjavsug0rw01ddqe24kf74wpy55a81hi49r3ps7l66qxwci02d35fqp8txbd1yoxyurowrpfyuzfinevdyaa12k427m3fxs724ujlmnzyfi4nt9hve0mu9hlrujtjq0z4ihif3yx9wrwfsvc270oy24bbw0pfumaytvdgrm83m5acbe5gboake74j8sgc9ns6tapsn3s3mhzijbr3z3lya0e4dch66fxcw6nskapp5lfx0wxgbum7dm62bryh9bldueryat27h2yje9zv2g7ol4io4djxdx7onloqgn0mh19e59xomybfhps6sw4jj047ypge0z5sjnxoiioef0vz8t2bcpu1ljv3nrx28h12mdvd593iw8piyodjq6mm5s8pxim9cvnchbngeyr9ad8ci58b3t2x4327i4c7l8jl8h79mravt5sb0jtl8evccbt601c99niykat28jj8wk049xlmuz42odzhh82ys47u9onio5fcn33wtj50yvg6m2esnnzk1ath2x46gyvqyxoqwyti6tdkfsw9n5sula77v42k9lhowvaxx8ei0eotv1npj928u6cjokpb25doc5j7ht1boi444cmifb5yf5v4goq870yy9l4zi2dctte3kinw44i0w9eu7f25oxpyleva84t3ukx5elydnw9qyu09yny0jd1fch59nm6qms8ie2cb5s6cy615657zusvxvj6qfijc17o4gw7ck5avqq18r9ej1jrb1zb6eesv44bxwhkoxtrixnoad0znji4m8md4bfs1drqywsbsmutg1j79l1cpw6rpga1sjg2sgxhlv3 == 
\n\y\v\0\h\n\k\3\d\k\3\p\5\6\8\g\4\4\p\s\1\x\2\b\9\r\a\4\x\9\j\c\3\9\n\7\s\r\u\y\l\f\v\c\2\x\o\k\q\8\9\v\q\5\k\c\i\v\5\g\u\e\6\i\r\q\r\s\r\z\7\3\0\3\t\i\6\6\n\1\g\l\8\j\w\d\1\r\7\i\m\h\p\m\w\g\3\b\r\b\j\i\i\3\w\w\x\f\8\p\c\c\g\4\f\y\a\9\s\r\p\v\i\s\p\p\c\h\u\f\n\l\5\x\4\v\3\0\p\v\q\r\p\l\r\m\6\q\o\b\6\j\n\2\l\5\y\1\t\b\g\j\s\i\f\t\k\s\z\j\9\j\m\i\q\9\z\1\w\e\j\c\a\7\z\o\7\t\t\d\9\m\g\s\j\a\v\s\u\g\0\r\w\0\1\d\d\q\e\2\4\k\f\7\4\w\p\y\5\5\a\8\1\h\i\4\9\r\3\p\s\7\l\6\6\q\x\w\c\i\0\2\d\3\5\f\q\p\8\t\x\b\d\1\y\o\x\y\u\r\o\w\r\p\f\y\u\z\f\i\n\e\v\d\y\a\a\1\2\k\4\2\7\m\3\f\x\s\7\2\4\u\j\l\m\n\z\y\f\i\4\n\t\9\h\v\e\0\m\u\9\h\l\r\u\j\t\j\q\0\z\4\i\h\i\f\3\y\x\9\w\r\w\f\s\v\c\2\7\0\o\y\2\4\b\b\w\0\p\f\u\m\a\y\t\v\d\g\r\m\8\3\m\5\a\c\b\e\5\g\b\o\a\k\e\7\4\j\8\s\g\c\9\n\s\6\t\a\p\s\n\3\s\3\m\h\z\i\j\b\r\3\z\3\l\y\a\0\e\4\d\c\h\6\6\f\x\c\w\6\n\s\k\a\p\p\5\l\f\x\0\w\x\g\b\u\m\7\d\m\6\2\b\r\y\h\9\b\l\d\u\e\r\y\a\t\2\7\h\2\y\j\e\9\z\v\2\g\7\o\l\4\i\o\4\d\j\x\d\x\7\o\n\l\o\q\g\n\0\m\h\1\9\e\5\9\x\o\m\y\b\f\h\p\s\6\s\w\4\j\j\0\4\7\y\p\g\e\0\z\5\s\j\n\x\o\i\i\o\e\f\0\v\z\8\t\2\b\c\p\u\1\l\j\v\3\n\r\x\2\8\h\1\2\m\d\v\d\5\9\3\i\w\8\p\i\y\o\d\j\q\6\m\m\5\s\8\p\x\i\m\9\c\v\n\c\h\b\n\g\e\y\r\9\a\d\8\c\i\5\8\b\3\t\2\x\4\3\2\7\i\4\c\7\l\8\j\l\8\h\7\9\m\r\a\v\t\5\s\b\0\j\t\l\8\e\v\c\c\b\t\6\0\1\c\9\9\n\i\y\k\a\t\2\8\j\j\8\w\k\0\4\9\x\l\m\u\z\4\2\o\d\z\h\h\8\2\y\s\4\7\u\9\o\n\i\o\5\f\c\n\3\3\w\t\j\5\0\y\v\g\6\m\2\e\s\n\n\z\k\1\a\t\h\2\x\4\6\g\y\v\q\y\x\o\q\w\y\t\i\6\t\d\k\f\s\w\9\n\5\s\u\l\a\7\7\v\4\2\k\9\l\h\o\w\v\a\x\x\8\e\i\0\e\o\t\v\1\n\p\j\9\2\8\u\6\c\j\o\k\p\b\2\5\d\o\c\5\j\7\h\t\1\b\o\i\4\4\4\c\m\i\f\b\5\y\f\5\v\4\g\o\q\8\7\0\y\y\9\l\4\z\i\2\d\c\t\t\e\3\k\i\n\w\4\4\i\0\w\9\e\u\7\f\2\5\o\x\p\y\l\e\v\a\8\4\t\3\u\k\x\5\e\l\y\d\n\w\9\q\y\u\0\9\y\n\y\0\j\d\1\f\c\h\5\9\n\m\6\q\m\s\8\i\e\2\c\b\5\s\6\c\y\6\1\5\6\5\7\z\u\s\v\x\v\j\6\q\f\i\j\c\1\7\o\4\g\w\7\c\k\5\a\v\q\q\1\8\r\9\e\j\1\j\r\b\1\z\b\6\e\e\s\v\4\4\b\x\w\h\k\o\x\t\r\i\x\n\o\a\d\0\z\n\j\i\4\m\8\m\d\4\b\f\s\1\d\r\q\y\w\s\b\s\m\u\t\g\1\j\7\9\l\1\c\p\w\6\r\p\g\a\1\s\j\g\2\s\g\x\h\l\v\3 ]] 00:07:59.470 10:57:04 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:59.729 10:57:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:59.729 10:57:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:59.729 10:57:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:59.729 10:57:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:59.729 [2024-10-29 10:57:05.107725] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:07:59.729 [2024-10-29 10:57:05.107844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74211 ] 00:07:59.729 { 00:07:59.729 "subsystems": [ 00:07:59.729 { 00:07:59.729 "subsystem": "bdev", 00:07:59.729 "config": [ 00:07:59.729 { 00:07:59.729 "params": { 00:07:59.729 "block_size": 512, 00:07:59.729 "num_blocks": 1048576, 00:07:59.729 "name": "malloc0" 00:07:59.729 }, 00:07:59.729 "method": "bdev_malloc_create" 00:07:59.729 }, 00:07:59.729 { 00:07:59.729 "params": { 00:07:59.729 "filename": "/dev/zram1", 00:07:59.729 "name": "uring0" 00:07:59.729 }, 00:07:59.729 "method": "bdev_uring_create" 00:07:59.729 }, 00:07:59.729 { 00:07:59.729 "method": "bdev_wait_for_examine" 00:07:59.729 } 00:07:59.729 ] 00:07:59.729 } 00:07:59.729 ] 00:07:59.729 } 00:07:59.988 [2024-10-29 10:57:05.254553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.988 [2024-10-29 10:57:05.273359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.988 [2024-10-29 10:57:05.301307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.367  [2024-10-29T10:57:07.436Z] Copying: 170/512 [MB] (170 MBps) [2024-10-29T10:57:08.828Z] Copying: 352/512 [MB] (182 MBps) [2024-10-29T10:57:08.828Z] Copying: 512/512 [MB] (average 171 MBps) 00:08:03.331 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:03.331 10:57:08 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.331 { 00:08:03.331 "subsystems": [ 00:08:03.331 { 00:08:03.331 "subsystem": "bdev", 00:08:03.331 "config": [ 00:08:03.331 { 00:08:03.331 "params": { 00:08:03.331 "block_size": 512, 00:08:03.331 "num_blocks": 1048576, 00:08:03.331 "name": "malloc0" 00:08:03.331 }, 00:08:03.331 "method": "bdev_malloc_create" 00:08:03.331 }, 00:08:03.331 { 00:08:03.331 "params": { 00:08:03.331 "filename": "/dev/zram1", 00:08:03.331 "name": "uring0" 00:08:03.331 }, 00:08:03.331 "method": "bdev_uring_create" 00:08:03.331 }, 00:08:03.331 { 00:08:03.331 "params": { 00:08:03.331 "name": "uring0" 00:08:03.331 }, 00:08:03.331 "method": "bdev_uring_delete" 00:08:03.331 }, 00:08:03.331 { 00:08:03.331 "method": "bdev_wait_for_examine" 00:08:03.331 } 00:08:03.331 ] 00:08:03.331 } 00:08:03.331 ] 00:08:03.331 } 00:08:03.331 [2024-10-29 10:57:08.694013] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:03.331 [2024-10-29 10:57:08.694106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74261 ] 00:08:03.591 [2024-10-29 10:57:08.841547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.591 [2024-10-29 10:57:08.861798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.591 [2024-10-29 10:57:08.891242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.591  [2024-10-29T10:57:09.348Z] Copying: 0/0 [B] (average 0 Bps) 00:08:03.851 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.851 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:03.851 [2024-10-29 10:57:09.279299] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:03.851 [2024-10-29 10:57:09.279420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74285 ] 00:08:03.851 { 00:08:03.851 "subsystems": [ 00:08:03.851 { 00:08:03.851 "subsystem": "bdev", 00:08:03.851 "config": [ 00:08:03.851 { 00:08:03.851 "params": { 00:08:03.851 "block_size": 512, 00:08:03.851 "num_blocks": 1048576, 00:08:03.851 "name": "malloc0" 00:08:03.851 }, 00:08:03.851 "method": "bdev_malloc_create" 00:08:03.851 }, 00:08:03.851 { 00:08:03.851 "params": { 00:08:03.851 "filename": "/dev/zram1", 00:08:03.851 "name": "uring0" 00:08:03.851 }, 00:08:03.851 "method": "bdev_uring_create" 00:08:03.851 }, 00:08:03.851 { 00:08:03.851 "params": { 00:08:03.851 "name": "uring0" 00:08:03.851 }, 00:08:03.851 "method": "bdev_uring_delete" 00:08:03.851 }, 00:08:03.851 { 00:08:03.851 "method": "bdev_wait_for_examine" 00:08:03.851 } 00:08:03.851 ] 00:08:03.851 } 00:08:03.851 ] 00:08:03.851 } 00:08:04.110 [2024-10-29 10:57:09.429467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.110 [2024-10-29 10:57:09.465133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.110 [2024-10-29 10:57:09.508255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.369 [2024-10-29 10:57:09.650873] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:04.369 [2024-10-29 10:57:09.650943] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:04.369 [2024-10-29 10:57:09.650958] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:04.369 [2024-10-29 10:57:09.650971] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.369 [2024-10-29 10:57:09.832181] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:04.627 10:57:09 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:04.885 00:08:04.885 real 0m12.671s 00:08:04.885 user 0m8.547s 00:08:04.885 sys 0m11.011s 00:08:04.885 10:57:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.885 10:57:10 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:04.885 ************************************ 00:08:04.885 END TEST dd_uring_copy 00:08:04.885 ************************************ 00:08:04.885 00:08:04.885 real 0m12.920s 00:08:04.885 user 0m8.695s 00:08:04.885 sys 0m11.111s 00:08:04.885 10:57:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.885 10:57:10 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:04.885 ************************************ 00:08:04.885 END TEST spdk_dd_uring 00:08:04.885 ************************************ 00:08:04.885 10:57:10 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:04.885 10:57:10 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.885 10:57:10 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.885 10:57:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:04.885 ************************************ 00:08:04.885 START TEST spdk_dd_sparse 00:08:04.885 ************************************ 00:08:04.885 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:04.885 * Looking for test storage... 00:08:04.885 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:04.885 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:04.885 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:08:04.885 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:05.143 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.144 --rc genhtml_branch_coverage=1 00:08:05.144 --rc genhtml_function_coverage=1 00:08:05.144 --rc genhtml_legend=1 00:08:05.144 --rc geninfo_all_blocks=1 00:08:05.144 --rc geninfo_unexecuted_blocks=1 00:08:05.144 00:08:05.144 ' 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.144 --rc genhtml_branch_coverage=1 00:08:05.144 --rc genhtml_function_coverage=1 00:08:05.144 --rc genhtml_legend=1 00:08:05.144 --rc geninfo_all_blocks=1 00:08:05.144 --rc geninfo_unexecuted_blocks=1 00:08:05.144 00:08:05.144 ' 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.144 --rc genhtml_branch_coverage=1 00:08:05.144 --rc genhtml_function_coverage=1 00:08:05.144 --rc genhtml_legend=1 00:08:05.144 --rc geninfo_all_blocks=1 00:08:05.144 --rc geninfo_unexecuted_blocks=1 00:08:05.144 00:08:05.144 ' 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.144 --rc genhtml_branch_coverage=1 00:08:05.144 --rc genhtml_function_coverage=1 00:08:05.144 --rc genhtml_legend=1 00:08:05.144 --rc geninfo_all_blocks=1 00:08:05.144 --rc geninfo_unexecuted_blocks=1 00:08:05.144 00:08:05.144 ' 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.144 10:57:10 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:05.144 1+0 records in 00:08:05.144 1+0 records out 00:08:05.144 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00425243 s, 986 MB/s 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:05.144 1+0 records in 00:08:05.144 1+0 records out 00:08:05.144 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00594061 s, 706 MB/s 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:05.144 1+0 records in 00:08:05.144 1+0 records out 00:08:05.144 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00493382 s, 850 MB/s 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:05.144 ************************************ 00:08:05.144 START TEST dd_sparse_file_to_file 00:08:05.144 ************************************ 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:05.144 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:05.144 [2024-10-29 10:57:10.518682] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:05.144 [2024-10-29 10:57:10.518826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74385 ] 00:08:05.144 { 00:08:05.144 "subsystems": [ 00:08:05.144 { 00:08:05.144 "subsystem": "bdev", 00:08:05.144 "config": [ 00:08:05.144 { 00:08:05.144 "params": { 00:08:05.144 "block_size": 4096, 00:08:05.144 "filename": "dd_sparse_aio_disk", 00:08:05.144 "name": "dd_aio" 00:08:05.144 }, 00:08:05.144 "method": "bdev_aio_create" 00:08:05.144 }, 00:08:05.144 { 00:08:05.144 "params": { 00:08:05.144 "lvs_name": "dd_lvstore", 00:08:05.144 "bdev_name": "dd_aio" 00:08:05.144 }, 00:08:05.144 "method": "bdev_lvol_create_lvstore" 00:08:05.144 }, 00:08:05.144 { 00:08:05.144 "method": "bdev_wait_for_examine" 00:08:05.144 } 00:08:05.144 ] 00:08:05.144 } 00:08:05.144 ] 00:08:05.144 } 00:08:05.403 [2024-10-29 10:57:10.667950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.403 [2024-10-29 10:57:10.693026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.403 [2024-10-29 10:57:10.725896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.403  [2024-10-29T10:57:11.159Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:05.662 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:05.662 00:08:05.662 real 0m0.513s 00:08:05.662 user 0m0.308s 00:08:05.662 sys 0m0.254s 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.662 10:57:10 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 ************************************ 00:08:05.662 END TEST dd_sparse_file_to_file 00:08:05.662 ************************************ 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 ************************************ 00:08:05.662 START TEST dd_sparse_file_to_bdev 
00:08:05.662 ************************************ 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:05.662 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:05.662 [2024-10-29 10:57:11.082248] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:08:05.662 [2024-10-29 10:57:11.082343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74433 ] 00:08:05.662 { 00:08:05.662 "subsystems": [ 00:08:05.662 { 00:08:05.662 "subsystem": "bdev", 00:08:05.662 "config": [ 00:08:05.662 { 00:08:05.662 "params": { 00:08:05.662 "block_size": 4096, 00:08:05.662 "filename": "dd_sparse_aio_disk", 00:08:05.662 "name": "dd_aio" 00:08:05.662 }, 00:08:05.662 "method": "bdev_aio_create" 00:08:05.662 }, 00:08:05.662 { 00:08:05.662 "params": { 00:08:05.662 "lvs_name": "dd_lvstore", 00:08:05.662 "lvol_name": "dd_lvol", 00:08:05.662 "size_in_mib": 36, 00:08:05.662 "thin_provision": true 00:08:05.662 }, 00:08:05.662 "method": "bdev_lvol_create" 00:08:05.662 }, 00:08:05.662 { 00:08:05.662 "method": "bdev_wait_for_examine" 00:08:05.662 } 00:08:05.662 ] 00:08:05.662 } 00:08:05.662 ] 00:08:05.662 } 00:08:05.921 [2024-10-29 10:57:11.229898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.921 [2024-10-29 10:57:11.254144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.921 [2024-10-29 10:57:11.290933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.921  [2024-10-29T10:57:11.677Z] Copying: 12/36 [MB] (average 480 MBps) 00:08:06.180 00:08:06.180 00:08:06.180 real 0m0.477s 00:08:06.180 user 0m0.293s 00:08:06.180 sys 0m0.241s 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.180 ************************************ 00:08:06.180 END TEST dd_sparse_file_to_bdev 00:08:06.180 ************************************ 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:06.180 ************************************ 00:08:06.180 START TEST dd_sparse_bdev_to_file 00:08:06.180 ************************************ 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:06.180 10:57:11 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:06.180 [2024-10-29 10:57:11.606087] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:08:06.181 [2024-10-29 10:57:11.606176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74460 ] 00:08:06.181 { 00:08:06.181 "subsystems": [ 00:08:06.181 { 00:08:06.181 "subsystem": "bdev", 00:08:06.181 "config": [ 00:08:06.181 { 00:08:06.181 "params": { 00:08:06.181 "block_size": 4096, 00:08:06.181 "filename": "dd_sparse_aio_disk", 00:08:06.181 "name": "dd_aio" 00:08:06.181 }, 00:08:06.181 "method": "bdev_aio_create" 00:08:06.181 }, 00:08:06.181 { 00:08:06.181 "method": "bdev_wait_for_examine" 00:08:06.181 } 00:08:06.181 ] 00:08:06.181 } 00:08:06.181 ] 00:08:06.181 } 00:08:06.440 [2024-10-29 10:57:11.753640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.440 [2024-10-29 10:57:11.774156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.440 [2024-10-29 10:57:11.807402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.440  [2024-10-29T10:57:12.195Z] Copying: 12/36 [MB] (average 1200 MBps) 00:08:06.698 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:06.699 00:08:06.699 real 0m0.471s 00:08:06.699 user 0m0.261s 00:08:06.699 sys 0m0.253s 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:06.699 ************************************ 00:08:06.699 END TEST dd_sparse_bdev_to_file 00:08:06.699 ************************************ 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:06.699 00:08:06.699 real 0m1.860s 00:08:06.699 user 0m1.046s 00:08:06.699 sys 0m0.954s 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.699 10:57:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:06.699 ************************************ 00:08:06.699 END TEST spdk_dd_sparse 00:08:06.699 ************************************ 00:08:06.699 10:57:12 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:06.699 10:57:12 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.699 10:57:12 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.699 10:57:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:06.699 ************************************ 00:08:06.699 START TEST spdk_dd_negative 00:08:06.699 ************************************ 00:08:06.699 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:06.958 * Looking for test storage... 
00:08:06.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.958 --rc genhtml_function_coverage=1 00:08:06.958 --rc genhtml_legend=1 00:08:06.958 --rc geninfo_all_blocks=1 00:08:06.958 --rc geninfo_unexecuted_blocks=1 00:08:06.958 00:08:06.958 ' 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.958 --rc genhtml_function_coverage=1 00:08:06.958 --rc genhtml_legend=1 00:08:06.958 --rc geninfo_all_blocks=1 00:08:06.958 --rc geninfo_unexecuted_blocks=1 00:08:06.958 00:08:06.958 ' 00:08:06.958 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:06.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.958 --rc genhtml_branch_coverage=1 00:08:06.959 --rc genhtml_function_coverage=1 00:08:06.959 --rc genhtml_legend=1 00:08:06.959 --rc geninfo_all_blocks=1 00:08:06.959 --rc geninfo_unexecuted_blocks=1 00:08:06.959 00:08:06.959 ' 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.959 --rc genhtml_branch_coverage=1 00:08:06.959 --rc genhtml_function_coverage=1 00:08:06.959 --rc genhtml_legend=1 00:08:06.959 --rc geninfo_all_blocks=1 00:08:06.959 --rc geninfo_unexecuted_blocks=1 00:08:06.959 00:08:06.959 ' 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:06.959 ************************************ 00:08:06.959 START TEST 
dd_invalid_arguments 00:08:06.959 ************************************ 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.959 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:06.959 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:06.959 00:08:06.959 CPU options: 00:08:06.959 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:06.959 (like [0,1,10]) 00:08:06.959 --lcores lcore to CPU mapping list. The list is in the format: 00:08:06.959 [<,lcores[@CPUs]>...] 00:08:06.959 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:06.959 Within the group, '-' is used for range separator, 00:08:06.959 ',' is used for single number separator. 00:08:06.959 '( )' can be omitted for single element group, 00:08:06.959 '@' can be omitted if cpus and lcores have the same value 00:08:06.959 --disable-cpumask-locks Disable CPU core lock files. 00:08:06.959 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:06.959 pollers in the app support interrupt mode) 00:08:06.959 -p, --main-core main (primary) core for DPDK 00:08:06.959 00:08:06.959 Configuration options: 00:08:06.959 -c, --config, --json JSON config file 00:08:06.959 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:06.959 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:06.959 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:06.959 --rpcs-allowed comma-separated list of permitted RPCS 00:08:06.959 --json-ignore-init-errors don't exit on invalid config entry 00:08:06.959 00:08:06.959 Memory options: 00:08:06.959 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:06.959 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:06.959 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:06.959 -R, --huge-unlink unlink huge files after initialization 00:08:06.959 -n, --mem-channels number of memory channels used for DPDK 00:08:06.959 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:06.959 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:06.959 --no-huge run without using hugepages 00:08:06.959 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:06.959 -i, --shm-id shared memory ID (optional) 00:08:06.959 -g, --single-file-segments force creating just one hugetlbfs file 00:08:06.959 00:08:06.959 PCI options: 00:08:06.959 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:06.959 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:06.959 -u, --no-pci disable PCI access 00:08:06.959 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:06.959 00:08:06.959 Log options: 00:08:06.959 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:06.959 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:06.959 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:06.959 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:06.959 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:08:06.959 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:08:06.959 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:08:06.959 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:08:06.959 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:06.959 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:08:06.959 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:08:06.959 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:08:06.959 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:06.959 --silence-noticelog disable notice level logging to stderr 00:08:06.959 00:08:06.959 Trace options: 00:08:06.959 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:06.959 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:06.959 [2024-10-29 10:57:12.397528] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:06.959 setting 0 to disable trace (default 32768) 00:08:06.959 Tracepoints vary in size and can use more than one trace entry. 00:08:06.959 -e, --tpoint-group [:] 00:08:06.959 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:08:06.959 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:08:06.959 blob, bdev_raid, scheduler, all). 00:08:06.959 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:06.959 a tracepoint group. First tpoint inside a group can be enabled by 00:08:06.959 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:06.959 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:08:06.959 in /include/spdk_internal/trace_defs.h 00:08:06.959 00:08:06.959 Other options: 00:08:06.959 -h, --help show this usage 00:08:06.959 -v, --version print SPDK version 00:08:06.959 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:06.959 --env-context Opaque context for use of the env implementation 00:08:06.959 00:08:06.960 Application specific: 00:08:06.960 [--------- DD Options ---------] 00:08:06.960 --if Input file. Must specify either --if or --ib. 00:08:06.960 --ib Input bdev. Must specifier either --if or --ib 00:08:06.960 --of Output file. Must specify either --of or --ob. 00:08:06.960 --ob Output bdev. Must specify either --of or --ob. 00:08:06.960 --iflag Input file flags. 00:08:06.960 --oflag Output file flags. 00:08:06.960 --bs I/O unit size (default: 4096) 00:08:06.960 --qd Queue depth (default: 2) 00:08:06.960 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:06.960 --skip Skip this many I/O units at start of input. (default: 0) 00:08:06.960 --seek Skip this many I/O units at start of output. (default: 0) 00:08:06.960 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:06.960 --sparse Enable hole skipping in input target 00:08:06.960 Available iflag and oflag values: 00:08:06.960 append - append mode 00:08:06.960 direct - use direct I/O for data 00:08:06.960 directory - fail unless a directory 00:08:06.960 dsync - use synchronized I/O for data 00:08:06.960 noatime - do not update access time 00:08:06.960 noctty - do not assign controlling terminal from file 00:08:06.960 nofollow - do not follow symlinks 00:08:06.960 nonblock - use non-blocking I/O 00:08:06.960 sync - use synchronized I/O for data and metadata 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.960 00:08:06.960 real 0m0.070s 00:08:06.960 user 0m0.042s 00:08:06.960 sys 0m0.026s 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:06.960 ************************************ 00:08:06.960 END TEST dd_invalid_arguments 00:08:06.960 ************************************ 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.960 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 ************************************ 00:08:07.219 START TEST dd_double_input 00:08:07.219 ************************************ 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:07.219 [2024-10-29 10:57:12.521189] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
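The negative tests above and below all follow the same pattern: drive spdk_dd with conflicting or missing I/O options and require a non-zero exit plus the matching *ERROR* line. A condensed sketch of that pattern follows; it reuses the binary path from the log, while the relative dump-file names stand in for the test's dd.dump0/dd.dump1 and the expect_failure helper is a simplification of the NOT/es= bookkeeping in autotest_common.sh.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
expect_failure() {
    # Succeed only if the wrapped command fails.
    if "$@" 2>/dev/null; then
        echo "expected failure but succeeded: $*" >&2
        return 1
    fi
}
expect_failure "$SPDK_DD" --ii= --ob=                          # unrecognized option
expect_failure "$SPDK_DD" --if=dd.dump0 --ib= --ob=            # --if and --ib together
expect_failure "$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --ob=    # --of and --ob together
expect_failure "$SPDK_DD" --ob=                                # no input given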
00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.219 00:08:07.219 real 0m0.077s 00:08:07.219 user 0m0.051s 00:08:07.219 sys 0m0.025s 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 ************************************ 00:08:07.219 END TEST dd_double_input 00:08:07.219 ************************************ 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 ************************************ 00:08:07.219 START TEST dd_double_output 00:08:07.219 ************************************ 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:07.219 [2024-10-29 10:57:12.649337] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.219 00:08:07.219 real 0m0.076s 00:08:07.219 user 0m0.052s 00:08:07.219 sys 0m0.023s 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 ************************************ 00:08:07.219 END TEST dd_double_output 00:08:07.219 ************************************ 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.219 ************************************ 00:08:07.219 START TEST dd_no_input 00:08:07.219 ************************************ 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.219 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:07.478 [2024-10-29 10:57:12.771017] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.478 00:08:07.478 real 0m0.078s 00:08:07.478 user 0m0.058s 00:08:07.478 sys 0m0.020s 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.478 ************************************ 00:08:07.478 END TEST dd_no_input 00:08:07.478 ************************************ 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.478 ************************************ 00:08:07.478 START TEST dd_no_output 00:08:07.478 ************************************ 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.478 [2024-10-29 10:57:12.898983] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:07.478 10:57:12 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.478 00:08:07.478 real 0m0.079s 00:08:07.478 user 0m0.052s 00:08:07.478 sys 0m0.026s 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.478 ************************************ 00:08:07.478 END TEST dd_no_output 00:08:07.478 ************************************ 00:08:07.478 10:57:12 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.479 ************************************ 00:08:07.479 START TEST dd_wrong_blocksize 00:08:07.479 ************************************ 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.479 10:57:12 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:07.739 [2024-10-29 10:57:13.028079] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.739 00:08:07.739 real 0m0.078s 00:08:07.739 user 0m0.053s 00:08:07.739 sys 0m0.024s 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:07.739 ************************************ 00:08:07.739 END TEST dd_wrong_blocksize 00:08:07.739 ************************************ 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:07.739 ************************************ 00:08:07.739 START TEST dd_smaller_blocksize 00:08:07.739 ************************************ 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.739 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.740 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.740 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.740 
10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.740 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:07.740 [2024-10-29 10:57:13.154015] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:08:07.740 [2024-10-29 10:57:13.154105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74692 ] 00:08:07.998 [2024-10-29 10:57:13.307414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.998 [2024-10-29 10:57:13.332711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.998 [2024-10-29 10:57:13.366312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.998 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:07.998 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:07.998 [2024-10-29 10:57:13.384744] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:07.998 [2024-10-29 10:57:13.384778] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.998 [2024-10-29 10:57:13.455035] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.257 00:08:08.257 real 0m0.425s 00:08:08.257 user 0m0.223s 00:08:08.257 sys 0m0.098s 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:08.257 ************************************ 00:08:08.257 END TEST dd_smaller_blocksize 00:08:08.257 ************************************ 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.257 ************************************ 00:08:08.257 START TEST dd_invalid_count 00:08:08.257 ************************************ 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.257 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:08.258 [2024-10-29 10:57:13.636152] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.258 00:08:08.258 real 0m0.081s 00:08:08.258 user 0m0.052s 00:08:08.258 sys 0m0.028s 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.258 ************************************ 00:08:08.258 END TEST dd_invalid_count 00:08:08.258 ************************************ 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.258 ************************************ 
00:08:08.258 START TEST dd_invalid_oflag 00:08:08.258 ************************************ 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.258 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:08.517 [2024-10-29 10:57:13.768555] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.517 00:08:08.517 real 0m0.079s 00:08:08.517 user 0m0.052s 00:08:08.517 sys 0m0.026s 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.517 ************************************ 00:08:08.517 END TEST dd_invalid_oflag 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:08.517 ************************************ 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.517 ************************************ 00:08:08.517 START TEST dd_invalid_iflag 00:08:08.517 
************************************ 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:08.517 [2024-10-29 10:57:13.891691] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:08.517 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:08.518 00:08:08.518 real 0m0.073s 00:08:08.518 user 0m0.044s 00:08:08.518 sys 0m0.028s 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:08.518 ************************************ 00:08:08.518 END TEST dd_invalid_iflag 00:08:08.518 ************************************ 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:08.518 ************************************ 00:08:08.518 START TEST dd_unknown_flag 00:08:08.518 ************************************ 00:08:08.518 
10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.518 10:57:13 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:08.777 [2024-10-29 10:57:14.024299] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:08.777 [2024-10-29 10:57:14.024405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74784 ] 00:08:08.777 [2024-10-29 10:57:14.178514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.777 [2024-10-29 10:57:14.204266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.777 [2024-10-29 10:57:14.239185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.777 [2024-10-29 10:57:14.257353] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:08.777 [2024-10-29 10:57:14.257449] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.777 [2024-10-29 10:57:14.257539] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:08.777 [2024-10-29 10:57:14.257563] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.777 [2024-10-29 10:57:14.257922] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:08.777 [2024-10-29 10:57:14.257958] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.777 [2024-10-29 10:57:14.258045] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:08.777 [2024-10-29 10:57:14.258069] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:09.036 [2024-10-29 10:57:14.325861] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.036 00:08:09.036 real 0m0.421s 00:08:09.036 user 0m0.212s 00:08:09.036 sys 0m0.118s 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.036 ************************************ 00:08:09.036 END TEST dd_unknown_flag 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:09.036 ************************************ 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:09.036 ************************************ 00:08:09.036 START TEST dd_invalid_json 00:08:09.036 ************************************ 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.036 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:09.036 [2024-10-29 10:57:14.494358] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:09.036 [2024-10-29 10:57:14.494484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74807 ] 00:08:09.296 [2024-10-29 10:57:14.650219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.296 [2024-10-29 10:57:14.674508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.296 [2024-10-29 10:57:14.674608] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:09.296 [2024-10-29 10:57:14.674640] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.296 [2024-10-29 10:57:14.674659] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.296 [2024-10-29 10:57:14.674717] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.296 00:08:09.296 real 0m0.291s 00:08:09.296 user 0m0.133s 00:08:09.296 sys 0m0.057s 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:09.296 ************************************ 00:08:09.296 END TEST dd_invalid_json 00:08:09.296 ************************************ 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:09.296 ************************************ 00:08:09.296 START TEST dd_invalid_seek 00:08:09.296 ************************************ 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:09.296 
10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:09.296 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.297 10:57:14 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:09.556 [2024-10-29 10:57:14.839350] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:09.556 [2024-10-29 10:57:14.839461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74835 ] 00:08:09.556 { 00:08:09.556 "subsystems": [ 00:08:09.556 { 00:08:09.556 "subsystem": "bdev", 00:08:09.556 "config": [ 00:08:09.556 { 00:08:09.556 "params": { 00:08:09.556 "block_size": 512, 00:08:09.556 "num_blocks": 512, 00:08:09.556 "name": "malloc0" 00:08:09.556 }, 00:08:09.556 "method": "bdev_malloc_create" 00:08:09.556 }, 00:08:09.556 { 00:08:09.556 "params": { 00:08:09.556 "block_size": 512, 00:08:09.556 "num_blocks": 512, 00:08:09.556 "name": "malloc1" 00:08:09.556 }, 00:08:09.556 "method": "bdev_malloc_create" 00:08:09.556 }, 00:08:09.556 { 00:08:09.556 "method": "bdev_wait_for_examine" 00:08:09.556 } 00:08:09.556 ] 00:08:09.556 } 00:08:09.556 ] 00:08:09.556 } 00:08:09.556 [2024-10-29 10:57:14.994143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.556 [2024-10-29 10:57:15.018290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.556 [2024-10-29 10:57:15.053365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.816 [2024-10-29 10:57:15.098544] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:09.816 [2024-10-29 10:57:15.098618] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.816 [2024-10-29 10:57:15.169112] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.816 00:08:09.816 real 0m0.447s 00:08:09.816 user 0m0.290s 00:08:09.816 sys 0m0.118s 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:09.816 ************************************ 00:08:09.816 END TEST dd_invalid_seek 00:08:09.816 ************************************ 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:09.816 ************************************ 00:08:09.816 START TEST dd_invalid_skip 00:08:09.816 ************************************ 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:09.816 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:10.083 [2024-10-29 10:57:15.338236] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:10.083 [2024-10-29 10:57:15.338337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74870 ] 00:08:10.083 { 00:08:10.083 "subsystems": [ 00:08:10.083 { 00:08:10.083 "subsystem": "bdev", 00:08:10.083 "config": [ 00:08:10.083 { 00:08:10.083 "params": { 00:08:10.083 "block_size": 512, 00:08:10.083 "num_blocks": 512, 00:08:10.083 "name": "malloc0" 00:08:10.083 }, 00:08:10.083 "method": "bdev_malloc_create" 00:08:10.083 }, 00:08:10.083 { 00:08:10.083 "params": { 00:08:10.083 "block_size": 512, 00:08:10.083 "num_blocks": 512, 00:08:10.083 "name": "malloc1" 00:08:10.083 }, 00:08:10.083 "method": "bdev_malloc_create" 00:08:10.083 }, 00:08:10.083 { 00:08:10.083 "method": "bdev_wait_for_examine" 00:08:10.083 } 00:08:10.083 ] 00:08:10.083 } 00:08:10.083 ] 00:08:10.083 } 00:08:10.083 [2024-10-29 10:57:15.492601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.083 [2024-10-29 10:57:15.517597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.083 [2024-10-29 10:57:15.552307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.368 [2024-10-29 10:57:15.597930] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:10.368 [2024-10-29 10:57:15.598015] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.368 [2024-10-29 10:57:15.666677] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.368 00:08:10.368 real 0m0.447s 00:08:10.368 user 0m0.285s 00:08:10.368 sys 0m0.123s 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:10.368 ************************************ 00:08:10.368 END TEST dd_invalid_skip 00:08:10.368 ************************************ 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:10.368 ************************************ 00:08:10.368 START TEST dd_invalid_input_count 00:08:10.368 ************************************ 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:08:10.368 10:57:15 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.368 10:57:15 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:10.368 [2024-10-29 10:57:15.829450] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:10.368 [2024-10-29 10:57:15.829521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74898 ] 00:08:10.368 { 00:08:10.368 "subsystems": [ 00:08:10.368 { 00:08:10.368 "subsystem": "bdev", 00:08:10.368 "config": [ 00:08:10.368 { 00:08:10.368 "params": { 00:08:10.368 "block_size": 512, 00:08:10.368 "num_blocks": 512, 00:08:10.368 "name": "malloc0" 00:08:10.368 }, 00:08:10.368 "method": "bdev_malloc_create" 00:08:10.368 }, 00:08:10.368 { 00:08:10.368 "params": { 00:08:10.368 "block_size": 512, 00:08:10.368 "num_blocks": 512, 00:08:10.368 "name": "malloc1" 00:08:10.368 }, 00:08:10.368 "method": "bdev_malloc_create" 00:08:10.368 }, 00:08:10.368 { 00:08:10.368 "method": "bdev_wait_for_examine" 00:08:10.368 } 00:08:10.368 ] 00:08:10.368 } 00:08:10.368 ] 00:08:10.368 } 00:08:10.628 [2024-10-29 10:57:15.971891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.628 [2024-10-29 10:57:15.993190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.628 [2024-10-29 10:57:16.021814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.628 [2024-10-29 10:57:16.062923] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:10.628 [2024-10-29 10:57:16.062991] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.628 [2024-10-29 10:57:16.123518] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.888 00:08:10.888 real 0m0.394s 00:08:10.888 user 0m0.243s 00:08:10.888 sys 0m0.115s 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:10.888 ************************************ 00:08:10.888 END TEST dd_invalid_input_count 00:08:10.888 ************************************ 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:10.888 ************************************ 00:08:10.888 START TEST dd_invalid_output_count 00:08:10.888 ************************************ 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # 
invalid_output_count 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.888 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:10.888 { 00:08:10.888 "subsystems": [ 00:08:10.888 { 00:08:10.888 "subsystem": "bdev", 00:08:10.888 "config": [ 00:08:10.888 { 00:08:10.888 "params": { 00:08:10.888 "block_size": 512, 00:08:10.888 "num_blocks": 512, 00:08:10.888 "name": "malloc0" 00:08:10.888 }, 00:08:10.888 "method": "bdev_malloc_create" 00:08:10.888 }, 00:08:10.888 { 00:08:10.888 "method": "bdev_wait_for_examine" 00:08:10.888 } 00:08:10.888 ] 00:08:10.888 } 00:08:10.888 ] 00:08:10.888 } 00:08:10.888 [2024-10-29 10:57:16.288120] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 
initialization... 00:08:10.888 [2024-10-29 10:57:16.288216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74937 ] 00:08:11.148 [2024-10-29 10:57:16.435906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.148 [2024-10-29 10:57:16.454252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.148 [2024-10-29 10:57:16.482632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.148 [2024-10-29 10:57:16.515631] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:11.148 [2024-10-29 10:57:16.515700] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.148 [2024-10-29 10:57:16.574347] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.148 00:08:11.148 real 0m0.392s 00:08:11.148 user 0m0.245s 00:08:11.148 sys 0m0.101s 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.148 10:57:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:11.148 ************************************ 00:08:11.148 END TEST dd_invalid_output_count 00:08:11.148 ************************************ 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 ************************************ 00:08:11.407 START TEST dd_bs_not_multiple 00:08:11.407 ************************************ 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:11.407 10:57:16 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.407 10:57:16 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:11.407 { 00:08:11.407 "subsystems": [ 00:08:11.407 { 00:08:11.407 "subsystem": "bdev", 00:08:11.407 "config": [ 00:08:11.407 { 00:08:11.407 "params": { 00:08:11.407 "block_size": 512, 00:08:11.407 "num_blocks": 512, 00:08:11.407 "name": "malloc0" 00:08:11.407 }, 00:08:11.407 "method": "bdev_malloc_create" 00:08:11.407 }, 00:08:11.407 { 00:08:11.407 "params": { 00:08:11.407 "block_size": 512, 00:08:11.407 "num_blocks": 512, 00:08:11.407 "name": "malloc1" 00:08:11.407 }, 00:08:11.407 "method": "bdev_malloc_create" 00:08:11.407 }, 00:08:11.407 { 00:08:11.407 "method": "bdev_wait_for_examine" 00:08:11.407 } 00:08:11.407 ] 00:08:11.407 } 00:08:11.407 ] 00:08:11.407 } 00:08:11.407 [2024-10-29 10:57:16.732256] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
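The dd_bs_not_multiple case being set up above hands spdk_dd two 512-byte-block malloc bdevs through a JSON config on /dev/fd/62 and then asks for --bs=513, which cannot be a multiple of the native block size. A minimal hand-run sketch of the same negative check, assuming an SPDK checkout with spdk_dd built at build/bin/spdk_dd (the path is illustrative):

    # Same bdev config as in the trace: two 512-block, 512-byte-block malloc bdevs.
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 } },
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc1", "num_blocks": 512, "block_size": 512 } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # 513 is deliberately not a multiple of the 512-byte native block size, so spdk_dd
    # is expected to fail with
    #   "--bs value must be a multiple of input native block size (512)"
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json <(printf '%s' "$conf")
    echo "spdk_dd exit status: $?"    # non-zero is the outcome the negative test wants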
00:08:11.407 [2024-10-29 10:57:16.732338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74963 ] 00:08:11.407 [2024-10-29 10:57:16.881000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.407 [2024-10-29 10:57:16.903154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.666 [2024-10-29 10:57:16.936287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.666 [2024-10-29 10:57:16.977223] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:11.666 [2024-10-29 10:57:16.977290] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.666 [2024-10-29 10:57:17.038179] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:11.666 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:11.666 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.666 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:11.666 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:11.666 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:11.666 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.666 00:08:11.666 real 0m0.416s 00:08:11.666 user 0m0.249s 00:08:11.666 sys 0m0.115s 00:08:11.667 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.667 10:57:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 ************************************ 00:08:11.667 END TEST dd_bs_not_multiple 00:08:11.667 ************************************ 00:08:11.667 00:08:11.667 real 0m4.996s 00:08:11.667 user 0m2.733s 00:08:11.667 sys 0m1.681s 00:08:11.667 10:57:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.667 10:57:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:11.667 ************************************ 00:08:11.667 END TEST spdk_dd_negative 00:08:11.667 ************************************ 00:08:11.667 00:08:11.667 real 1m2.284s 00:08:11.667 user 0m39.109s 00:08:11.667 sys 0m26.712s 00:08:11.667 10:57:17 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.667 ************************************ 00:08:11.667 END TEST spdk_dd 00:08:11.667 ************************************ 00:08:11.667 10:57:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:11.926 10:57:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:11.926 10:57:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.926 10:57:17 -- common/autotest_common.sh@10 -- # set +x 00:08:11.926 10:57:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:08:11.926 10:57:17 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:11.926 10:57:17 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:11.926 10:57:17 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:11.926 10:57:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.926 10:57:17 -- common/autotest_common.sh@10 -- # set +x 00:08:11.926 ************************************ 00:08:11.926 START TEST nvmf_tcp 00:08:11.926 ************************************ 00:08:11.926 10:57:17 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:11.926 * Looking for test storage... 00:08:11.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:11.926 10:57:17 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:11.926 10:57:17 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:11.926 10:57:17 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:11.926 10:57:17 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.926 10:57:17 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.186 10:57:17 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.186 --rc genhtml_branch_coverage=1 00:08:12.186 --rc genhtml_function_coverage=1 00:08:12.186 --rc genhtml_legend=1 00:08:12.186 --rc geninfo_all_blocks=1 00:08:12.186 --rc geninfo_unexecuted_blocks=1 00:08:12.186 00:08:12.186 ' 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.186 --rc genhtml_branch_coverage=1 00:08:12.186 --rc genhtml_function_coverage=1 00:08:12.186 --rc genhtml_legend=1 00:08:12.186 --rc geninfo_all_blocks=1 00:08:12.186 --rc geninfo_unexecuted_blocks=1 00:08:12.186 00:08:12.186 ' 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.186 --rc genhtml_branch_coverage=1 00:08:12.186 --rc genhtml_function_coverage=1 00:08:12.186 --rc genhtml_legend=1 00:08:12.186 --rc geninfo_all_blocks=1 00:08:12.186 --rc geninfo_unexecuted_blocks=1 00:08:12.186 00:08:12.186 ' 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.186 --rc genhtml_branch_coverage=1 00:08:12.186 --rc genhtml_function_coverage=1 00:08:12.186 --rc genhtml_legend=1 00:08:12.186 --rc geninfo_all_blocks=1 00:08:12.186 --rc geninfo_unexecuted_blocks=1 00:08:12.186 00:08:12.186 ' 00:08:12.186 10:57:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:12.186 10:57:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:12.186 10:57:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.186 10:57:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:12.186 ************************************ 00:08:12.186 START TEST nvmf_target_core 00:08:12.186 ************************************ 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:12.186 * Looking for test storage... 00:08:12.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:12.186 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.187 --rc genhtml_branch_coverage=1 00:08:12.187 --rc genhtml_function_coverage=1 00:08:12.187 --rc genhtml_legend=1 00:08:12.187 --rc geninfo_all_blocks=1 00:08:12.187 --rc geninfo_unexecuted_blocks=1 00:08:12.187 00:08:12.187 ' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.187 --rc genhtml_branch_coverage=1 00:08:12.187 --rc genhtml_function_coverage=1 00:08:12.187 --rc genhtml_legend=1 00:08:12.187 --rc geninfo_all_blocks=1 00:08:12.187 --rc geninfo_unexecuted_blocks=1 00:08:12.187 00:08:12.187 ' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.187 --rc genhtml_branch_coverage=1 00:08:12.187 --rc genhtml_function_coverage=1 00:08:12.187 --rc genhtml_legend=1 00:08:12.187 --rc geninfo_all_blocks=1 00:08:12.187 --rc geninfo_unexecuted_blocks=1 00:08:12.187 00:08:12.187 ' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.187 --rc genhtml_branch_coverage=1 00:08:12.187 --rc genhtml_function_coverage=1 00:08:12.187 --rc genhtml_legend=1 00:08:12.187 --rc geninfo_all_blocks=1 00:08:12.187 --rc geninfo_unexecuted_blocks=1 00:08:12.187 00:08:12.187 ' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.187 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.187 ************************************ 00:08:12.187 START TEST nvmf_host_management 00:08:12.187 ************************************ 00:08:12.187 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:12.447 * Looking for test storage... 
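The '[' '' -eq 1 ']' check traced above is what makes bash print "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected": the variable being tested expands to an empty string, test cannot parse it as an integer, and the non-zero status simply lets the script fall through to the next branch. A reduced sketch of that behaviour (the variable name is a stand-in, since it is not visible in the trace):

    some_flag=""                                   # empty, exactly as in '[' '' -eq 1 ']'
    [ "$some_flag" -eq 1 ] && echo "branch taken"
    # bash: [: : integer expression expected  -> status 2, branch skipped
    # A numeric default keeps the same behaviour without the noise:
    [ "${some_flag:-0}" -eq 1 ] && echo "branch taken"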
00:08:12.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.447 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.448 --rc genhtml_branch_coverage=1 00:08:12.448 --rc genhtml_function_coverage=1 00:08:12.448 --rc genhtml_legend=1 00:08:12.448 --rc geninfo_all_blocks=1 00:08:12.448 --rc geninfo_unexecuted_blocks=1 00:08:12.448 00:08:12.448 ' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.448 --rc genhtml_branch_coverage=1 00:08:12.448 --rc genhtml_function_coverage=1 00:08:12.448 --rc genhtml_legend=1 00:08:12.448 --rc geninfo_all_blocks=1 00:08:12.448 --rc geninfo_unexecuted_blocks=1 00:08:12.448 00:08:12.448 ' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.448 --rc genhtml_branch_coverage=1 00:08:12.448 --rc genhtml_function_coverage=1 00:08:12.448 --rc genhtml_legend=1 00:08:12.448 --rc geninfo_all_blocks=1 00:08:12.448 --rc geninfo_unexecuted_blocks=1 00:08:12.448 00:08:12.448 ' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.448 --rc genhtml_branch_coverage=1 00:08:12.448 --rc genhtml_function_coverage=1 00:08:12.448 --rc genhtml_legend=1 00:08:12.448 --rc geninfo_all_blocks=1 00:08:12.448 --rc geninfo_unexecuted_blocks=1 00:08:12.448 00:08:12.448 ' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
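The block repeated above for each test scope is scripts/common.sh deciding whether the installed lcov is older than 2.x: it takes the last field of `lcov --version`, splits both version strings on ".", "-" and ":", compares the fields numerically, and since 1.15 < 2 falls back to the old --rc coverage switches. A condensed sketch of that gate, reproducing only what the trace shows:

    lt() {    # true (0) when version $1 is strictly older than $2
        local -a ver1 ver2
        local v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not "less than"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        # pre-2.0 lcov (1.15 here): enable branch/function coverage the old way
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi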
00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:12.448 10:57:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:12.448 Cannot find device "nvmf_init_br" 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:12.448 Cannot find device "nvmf_init_br2" 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:12.448 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:12.448 Cannot find device "nvmf_tgt_br" 00:08:12.449 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:12.449 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.449 Cannot find device "nvmf_tgt_br2" 00:08:12.449 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:12.449 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:12.449 Cannot find device "nvmf_init_br" 00:08:12.449 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:12.449 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:12.708 Cannot find device "nvmf_init_br2" 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:12.708 Cannot find device "nvmf_tgt_br" 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:12.708 Cannot find device "nvmf_tgt_br2" 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:12.708 Cannot find device "nvmf_br" 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:12.708 Cannot find device "nvmf_init_if" 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:12.708 Cannot find device "nvmf_init_if2" 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:12.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:12.708 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:12.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:12.708 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:12.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:12.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:12.967 00:08:12.967 --- 10.0.0.3 ping statistics --- 00:08:12.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.967 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:12.967 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:12.967 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:08:12.967 00:08:12.967 --- 10.0.0.4 ping statistics --- 00:08:12.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.967 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:12.967 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:12.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:12.968 00:08:12.968 --- 10.0.0.1 ping statistics --- 00:08:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.968 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:12.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:12.968 00:08:12.968 --- 10.0.0.2 ping statistics --- 00:08:12.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.968 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=75310 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 75310 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 75310 ']' 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.968 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.968 [2024-10-29 10:57:18.438337] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
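By this point nvmf_veth_init has built the all-virtual test network that the pings above confirm: two initiator-side veth interfaces in the root namespace (10.0.0.1 and 10.0.0.2), two target-side interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), their peer ends enslaved to the nvmf_br bridge, and TCP port 4420 opened in iptables. Condensed to the essential commands, all taken from the trace above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace can reach the target-namespace addresses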
00:08:12.968 [2024-10-29 10:57:18.438643] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.227 [2024-10-29 10:57:18.593864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.227 [2024-10-29 10:57:18.620314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.227 [2024-10-29 10:57:18.620617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.227 [2024-10-29 10:57:18.620834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.227 [2024-10-29 10:57:18.621067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.227 [2024-10-29 10:57:18.621117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.227 [2024-10-29 10:57:18.622232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.227 [2024-10-29 10:57:18.622361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.227 [2024-10-29 10:57:18.622496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.227 [2024-10-29 10:57:18.622497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.227 [2024-10-29 10:57:18.658046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.227 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:13.227 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:13.227 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.227 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:13.227 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 [2024-10-29 10:57:18.751040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
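The cat on the next trace line regenerates rpcs.txt (just removed above) and pipes it through rpc_cmd; the file's exact contents are not echoed in this log, but the objects it creates do appear below: a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode0, a TCP listener on 10.0.0.3:4420, and host nqn.2016-06.io.spdk:host0. A plausible sketch of such a batch, with the bdev size, block size and serial number assumed:

# Assumed shape of the RPC batch fed to rpc_cmd; only the names and the
# listener address/port are confirmed by the surrounding trace.
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0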
00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 Malloc0 00:08:13.487 [2024-10-29 10:57:18.825509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=75351 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 75351 /var/tmp/bdevperf.sock 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 75351 ']' 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.487 { 00:08:13.487 "params": { 00:08:13.487 "name": "Nvme$subsystem", 00:08:13.487 "trtype": "$TEST_TRANSPORT", 00:08:13.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.487 "adrfam": "ipv4", 00:08:13.487 "trsvcid": "$NVMF_PORT", 00:08:13.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.487 "hdgst": ${hdgst:-false}, 00:08:13.487 "ddgst": ${ddgst:-false} 00:08:13.487 }, 00:08:13.487 "method": "bdev_nvme_attach_controller" 00:08:13.487 } 00:08:13.487 EOF 00:08:13.487 )") 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:13.487 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:13.487 "params": { 00:08:13.487 "name": "Nvme0", 00:08:13.487 "trtype": "tcp", 00:08:13.487 "traddr": "10.0.0.3", 00:08:13.487 "adrfam": "ipv4", 00:08:13.487 "trsvcid": "4420", 00:08:13.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:13.487 "hdgst": false, 00:08:13.487 "ddgst": false 00:08:13.487 }, 00:08:13.487 "method": "bdev_nvme_attach_controller" 00:08:13.487 }' 00:08:13.487 [2024-10-29 10:57:18.958444] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:08:13.487 [2024-10-29 10:57:18.958726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75351 ] 00:08:13.746 [2024-10-29 10:57:19.125553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.747 [2024-10-29 10:57:19.149554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.747 [2024-10-29 10:57:19.191658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.006 Running I/O for 10 seconds... 
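bdevperf is now driving a queue-depth-64, 64 KiB verify workload against Nvme0n1 for 10 seconds. The waitforio helper traced in the next chunk polls bdevperf's RPC socket until the bdev has completed at least 100 reads (the trace reports 1027) before the test disturbs the connection. A sketch of that poll; the socket path, bdev name, jq filter, retry count and threshold come from the trace, while the sleep interval is an assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
for attempt in {1..10}; do
    reads=$("$rpc" -s "$sock" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break   # 100-read threshold, as checked at host_management.sh@58
    sleep 1                         # assumed interval
done

Once that threshold is met, the script removes host nqn.2016-06.io.spdk:host0 from nqn.2016-06.io.spdk:cnode0 and immediately re-adds it; the long run of ABORTED - SQ DELETION completions that follows is the expected fallout of that forced disconnect, after which bdevperf reports the job as failed at about 0.75 seconds and the controller is reset and reconnected.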
00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.575 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.575 [2024-10-29 
10:57:20.020175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.575 [2024-10-29 10:57:20.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.020388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.575 [2024-10-29 10:57:20.020401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.020412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.575 [2024-10-29 10:57:20.020421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.020431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:14.575 [2024-10-29 10:57:20.020440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.020450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd380 is same with the state(6) to be set 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.575 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:14.575 [2024-10-29 10:57:20.042455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.575 [2024-10-29 10:57:20.042488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.042509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.575 [2024-10-29 10:57:20.042519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.042530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.575 [2024-10-29 10:57:20.042538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.042549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:14.575 [2024-10-29 10:57:20.042558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.575 [2024-10-29 10:57:20.042568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.575 [2024-10-29 10:57:20.042576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 
[2024-10-29 10:57:20.042753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 
10:57:20.042945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.042984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.042994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 
10:57:20.043136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 
10:57:20.043324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.576 [2024-10-29 10:57:20.043485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.576 [2024-10-29 10:57:20.043493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 
10:57:20.043587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 10:57:20.043792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:14.577 [2024-10-29 
10:57:20.043815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:14.577 [2024-10-29 10:57:20.043825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446750 is same with the state(6) to be set 00:08:14.577 [2024-10-29 10:57:20.043958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dd380 (9): Bad file descriptor 00:08:14.577 task offset: 16384 on job bdev=Nvme0n1 fails 00:08:14.577 00:08:14.577 Latency(us) 00:08:14.577 [2024-10-29T10:57:20.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.577 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:14.577 Job: Nvme0n1 ended in about 0.75 seconds with error 00:08:14.577 Verification LBA range: start 0x0 length 0x400 00:08:14.577 Nvme0n1 : 0.75 1529.67 95.60 84.98 0.00 38555.15 1884.16 43372.92 00:08:14.577 [2024-10-29T10:57:20.074Z] =================================================================================================================== 00:08:14.577 [2024-10-29T10:57:20.074Z] Total : 1529.67 95.60 84.98 0.00 38555.15 1884.16 43372.92 00:08:14.577 [2024-10-29 10:57:20.045088] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:14.577 [2024-10-29 10:57:20.046942] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.577 [2024-10-29 10:57:20.057853] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 75351 00:08:15.955 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (75351) - No such process 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.955 { 00:08:15.955 "params": { 00:08:15.955 "name": "Nvme$subsystem", 00:08:15.955 "trtype": "$TEST_TRANSPORT", 00:08:15.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.955 "adrfam": "ipv4", 00:08:15.955 "trsvcid": "$NVMF_PORT", 00:08:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.955 "hdgst": ${hdgst:-false}, 00:08:15.955 "ddgst": ${ddgst:-false} 00:08:15.955 }, 00:08:15.955 "method": "bdev_nvme_attach_controller" 00:08:15.955 } 
00:08:15.955 EOF 00:08:15.955 )") 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:15.955 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.955 "params": { 00:08:15.955 "name": "Nvme0", 00:08:15.955 "trtype": "tcp", 00:08:15.955 "traddr": "10.0.0.3", 00:08:15.955 "adrfam": "ipv4", 00:08:15.955 "trsvcid": "4420", 00:08:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:15.955 "hdgst": false, 00:08:15.955 "ddgst": false 00:08:15.955 }, 00:08:15.955 "method": "bdev_nvme_attach_controller" 00:08:15.955 }' 00:08:15.955 [2024-10-29 10:57:21.099691] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:08:15.955 [2024-10-29 10:57:21.099787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75395 ] 00:08:15.955 [2024-10-29 10:57:21.250111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.955 [2024-10-29 10:57:21.269220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.955 [2024-10-29 10:57:21.305486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.955 Running I/O for 1 seconds... 00:08:17.334 1600.00 IOPS, 100.00 MiB/s 00:08:17.334 Latency(us) 00:08:17.335 [2024-10-29T10:57:22.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.335 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:17.335 Verification LBA range: start 0x0 length 0x400 00:08:17.335 Nvme0n1 : 1.03 1622.25 101.39 0.00 0.00 38717.26 3619.37 34317.03 00:08:17.335 [2024-10-29T10:57:22.832Z] =================================================================================================================== 00:08:17.335 [2024-10-29T10:57:22.832Z] Total : 1622.25 101.39 0.00 0.00 38717.26 3619.37 34317.03 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:17.335 10:57:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.335 rmmod nvme_tcp 00:08:17.335 rmmod nvme_fabrics 00:08:17.335 rmmod nvme_keyring 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 75310 ']' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 75310 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 75310 ']' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 75310 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75310 00:08:17.335 killing process with pid 75310 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75310' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 75310 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 75310 00:08:17.335 [2024-10-29 10:57:22.785670] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:17.335 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:17.594 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:17.594 00:08:17.594 real 0m5.413s 00:08:17.594 user 0m19.340s 00:08:17.594 sys 0m1.505s 00:08:17.594 ************************************ 00:08:17.594 END TEST nvmf_host_management 00:08:17.594 ************************************ 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.594 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 ************************************ 00:08:17.855 START TEST nvmf_lvol 00:08:17.855 ************************************ 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:17.855 * Looking for test 
storage... 00:08:17.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.855 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:17.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.855 --rc genhtml_branch_coverage=1 00:08:17.855 --rc genhtml_function_coverage=1 00:08:17.855 --rc genhtml_legend=1 00:08:17.855 --rc geninfo_all_blocks=1 00:08:17.855 --rc geninfo_unexecuted_blocks=1 00:08:17.855 00:08:17.855 ' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:17.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.856 --rc genhtml_branch_coverage=1 00:08:17.856 --rc genhtml_function_coverage=1 00:08:17.856 --rc genhtml_legend=1 00:08:17.856 --rc geninfo_all_blocks=1 00:08:17.856 --rc geninfo_unexecuted_blocks=1 00:08:17.856 00:08:17.856 ' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:17.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.856 --rc genhtml_branch_coverage=1 00:08:17.856 --rc genhtml_function_coverage=1 00:08:17.856 --rc genhtml_legend=1 00:08:17.856 --rc geninfo_all_blocks=1 00:08:17.856 --rc geninfo_unexecuted_blocks=1 00:08:17.856 00:08:17.856 ' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:17.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.856 --rc genhtml_branch_coverage=1 00:08:17.856 --rc genhtml_function_coverage=1 00:08:17.856 --rc genhtml_legend=1 00:08:17.856 --rc geninfo_all_blocks=1 00:08:17.856 --rc geninfo_unexecuted_blocks=1 00:08:17.856 00:08:17.856 ' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.856 10:57:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.856 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:17.856 
10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
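The veth-init variables traced above describe the virtual test network that nvmf_veth_init assembles next: initiator addresses 10.0.0.1/2 stay in the default namespace, target addresses 10.0.0.3/4 live inside nvmf_tgt_ns_spdk, and both sides meet on the nvmf_br bridge. A condensed sketch of that plumbing, showing only the first of the two veth pairs and omitting the link-up and pre-cleanup steps the trace below performs (sketch only, not the exact common.sh code):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + its bridge end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + its bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target side moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # both bridge ends enslaved to nvmf_br
ip link set nvmf_tgt_br master nvmf_br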
00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:17.856 Cannot find device "nvmf_init_br" 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:17.856 Cannot find device "nvmf_init_br2" 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:17.856 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:18.116 Cannot find device "nvmf_tgt_br" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.116 Cannot find device "nvmf_tgt_br2" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:18.116 Cannot find device "nvmf_init_br" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:18.116 Cannot find device "nvmf_init_br2" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:18.116 Cannot find device "nvmf_tgt_br" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:18.116 Cannot find device "nvmf_tgt_br2" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:18.116 Cannot find device "nvmf_br" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:18.116 Cannot find device "nvmf_init_if" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:18.116 Cannot find device "nvmf_init_if2" 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:18.116 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:18.117 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:18.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:18.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:18.376 00:08:18.376 --- 10.0.0.3 ping statistics --- 00:08:18.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.376 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:18.376 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:18.376 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:08:18.376 00:08:18.376 --- 10.0.0.4 ping statistics --- 00:08:18.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.376 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:18.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:18.376 00:08:18.376 --- 10.0.0.1 ping statistics --- 00:08:18.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.376 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:18.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:18.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:08:18.376 00:08:18.376 --- 10.0.0.2 ping statistics --- 00:08:18.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.376 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=75660 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 75660 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 75660 ']' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.376 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.376 [2024-10-29 10:57:23.789613] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
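With the bridged path verified by the pings above, nvmfappstart launches the target inside the namespace and waitforlisten blocks until its RPC socket answers. A minimal sketch of that idea; the readiness poll via spdk_get_version is an illustration rather than the actual waitforlisten implementation, and paths are relative to the spdk repo:
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!                                                     # pid recorded for the later killprocess
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5                                                  # keep polling until the app listens on the UNIX socket
done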
00:08:18.376 [2024-10-29 10:57:23.789853] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.635 [2024-10-29 10:57:23.927483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.635 [2024-10-29 10:57:23.945794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.635 [2024-10-29 10:57:23.945847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.635 [2024-10-29 10:57:23.945873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.635 [2024-10-29 10:57:23.945880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.635 [2024-10-29 10:57:23.945886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.635 [2024-10-29 10:57:23.946544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.635 [2024-10-29 10:57:23.946850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.635 [2024-10-29 10:57:23.946861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.635 [2024-10-29 10:57:23.977046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.635 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:18.894 [2024-10-29 10:57:24.386335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.153 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.412 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:19.412 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:19.670 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:19.670 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:19.929 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:20.188 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=62838b7f-9707-4289-b5e0-8237442013e6 00:08:20.188 10:57:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62838b7f-9707-4289-b5e0-8237442013e6 lvol 20 00:08:20.448 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=867e06fa-e4ea-4883-ae50-3749d43096e9 00:08:20.448 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.706 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 867e06fa-e4ea-4883-ae50-3749d43096e9 00:08:20.965 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:21.224 [2024-10-29 10:57:26.618644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:21.224 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:21.483 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=75728 00:08:21.483 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:21.483 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:22.421 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 867e06fa-e4ea-4883-ae50-3749d43096e9 MY_SNAPSHOT 00:08:23.018 10:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ac20876c-5cbe-4b87-9ccf-45ecf096b230 00:08:23.018 10:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 867e06fa-e4ea-4883-ae50-3749d43096e9 30 00:08:23.018 10:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ac20876c-5cbe-4b87-9ccf-45ecf096b230 MY_CLONE 00:08:23.278 10:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5c49a307-25a7-4baf-a851-cc4a0d17c17a 00:08:23.278 10:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5c49a307-25a7-4baf-a851-cc4a0d17c17a 00:08:23.843 10:57:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 75728 00:08:31.960 Initializing NVMe Controllers 00:08:31.960 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:31.960 Controller IO queue size 128, less than required. 00:08:31.960 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:31.960 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:31.960 Initialization complete. Launching workers. 
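Condensed from the rpc.py calls traced above, the provisioning path for this test is: a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvolstore on top, a 20 MiB lvol exported through subsystem cnode0 on 10.0.0.3:4420, then snapshot/resize/clone/inflate while spdk_nvme_perf drives I/O. Sketch only; the UUIDs printed in the trace are replaced with placeholders:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                   # Malloc0
rpc.py bdev_malloc_create 64 512                                   # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                          # prints the lvstore UUID
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # 20 MiB lvol
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# while spdk_nvme_perf runs against the exported namespace:
rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
rpc.py bdev_lvol_resize <lvol-uuid> 30
rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
rpc.py bdev_lvol_inflate <clone-uuid>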
00:08:31.960 ======================================================== 00:08:31.960 Latency(us) 00:08:31.960 Device Information : IOPS MiB/s Average min max 00:08:31.960 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10999.11 42.97 11638.27 2224.28 63800.68 00:08:31.960 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11063.51 43.22 11568.65 1340.70 57236.36 00:08:31.960 ======================================================== 00:08:31.960 Total : 22062.61 86.18 11603.35 1340.70 63800.68 00:08:31.960 00:08:31.960 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:32.218 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 867e06fa-e4ea-4883-ae50-3749d43096e9 00:08:32.218 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62838b7f-9707-4289-b5e0-8237442013e6 00:08:32.479 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:32.479 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:32.479 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:32.479 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:32.479 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:32.738 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.738 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:32.738 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.738 10:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.738 rmmod nvme_tcp 00:08:32.738 rmmod nvme_fabrics 00:08:32.738 rmmod nvme_keyring 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 75660 ']' 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 75660 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 75660 ']' 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 75660 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75660 00:08:32.738 killing process with pid 75660 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 75660' 00:08:32.738 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 75660 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 75660 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:32.739 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.998 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:33.258 00:08:33.258 real 0m15.380s 00:08:33.258 user 1m3.780s 00:08:33.258 sys 0m4.186s 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:33.258 ************************************ 00:08:33.258 END TEST nvmf_lvol 00:08:33.258 ************************************ 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.258 10:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.258 ************************************ 00:08:33.258 START TEST nvmf_lvs_grow 00:08:33.259 ************************************ 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:33.259 * Looking for test storage... 00:08:33.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:33.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.259 --rc genhtml_branch_coverage=1 00:08:33.259 --rc genhtml_function_coverage=1 00:08:33.259 --rc genhtml_legend=1 00:08:33.259 --rc geninfo_all_blocks=1 00:08:33.259 --rc geninfo_unexecuted_blocks=1 00:08:33.259 00:08:33.259 ' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:33.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.259 --rc genhtml_branch_coverage=1 00:08:33.259 --rc genhtml_function_coverage=1 00:08:33.259 --rc genhtml_legend=1 00:08:33.259 --rc geninfo_all_blocks=1 00:08:33.259 --rc geninfo_unexecuted_blocks=1 00:08:33.259 00:08:33.259 ' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:33.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.259 --rc genhtml_branch_coverage=1 00:08:33.259 --rc genhtml_function_coverage=1 00:08:33.259 --rc genhtml_legend=1 00:08:33.259 --rc geninfo_all_blocks=1 00:08:33.259 --rc geninfo_unexecuted_blocks=1 00:08:33.259 00:08:33.259 ' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:33.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.259 --rc genhtml_branch_coverage=1 00:08:33.259 --rc genhtml_function_coverage=1 00:08:33.259 --rc genhtml_legend=1 00:08:33.259 --rc geninfo_all_blocks=1 00:08:33.259 --rc geninfo_unexecuted_blocks=1 00:08:33.259 00:08:33.259 ' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:33.259 10:57:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.259 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.260 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
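Both tests open with the same lcov version gate traced above: lt 1.15 2 asks cmp_versions whether the installed lcov is older than 2, and if so the legacy branch/function coverage switches are kept in LCOV_OPTS. A simplified sketch of that gate, using sort -V instead of the component-wise loop cmp_versions actually runs:
lcov_ver=$(lcov --version | awk '{print $NF}')
if [ "$(printf '%s\n' "$lcov_ver" 2 | sort -V | head -n1)" != 2 ]; then
    # lcov older than 2 still needs the explicit coverage switches
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi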
00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
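The "Cannot find device" and "Cannot open network namespace" lines that follow are expected: before building anything, nvmf_veth_init tears down whatever a previous run left behind, and each cleanup command is allowed to fail (the "# true" entries in the trace). Roughly this pattern, sketched; the real common.sh walks every link and the namespace explicitly:
ip link delete nvmf_br type bridge || true      # prints "Cannot find device" on a clean host
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true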
00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.260 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:33.520 Cannot find device "nvmf_init_br" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:33.520 Cannot find device "nvmf_init_br2" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:33.520 Cannot find device "nvmf_tgt_br" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.520 Cannot find device "nvmf_tgt_br2" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:33.520 Cannot find device "nvmf_init_br" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.520 Cannot find device "nvmf_init_br2" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.520 Cannot find device "nvmf_tgt_br" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.520 Cannot find device "nvmf_tgt_br2" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.520 Cannot find device "nvmf_br" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.520 Cannot find device "nvmf_init_if" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.520 Cannot find device "nvmf_init_if2" 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.520 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.520 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.520 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.779 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.779 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
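The trace up to this point is nvmf_veth_init laying out the test network: two initiator/target veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, the initiator ends addressed 10.0.0.1 and 10.0.0.2 in the root namespace, and all four bridge-side peers enslaved to nvmf_br. A minimal stand-alone sketch of the same topology, reduced to the first initiator/target pair (illustrative reconstruction only, not the harness script itself; requires root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

With the bridge wired up, the harness next opens TCP port 4420 on the initiator-facing interfaces via iptables and verifies connectivity in both directions with the pings that follow.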
00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:08:33.780 00:08:33.780 --- 10.0.0.3 ping statistics --- 00:08:33.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.780 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.780 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.780 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:33.780 00:08:33.780 --- 10.0.0.4 ping statistics --- 00:08:33.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.780 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:33.780 00:08:33.780 --- 10.0.0.1 ping statistics --- 00:08:33.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.780 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:08:33.780 00:08:33.780 --- 10.0.0.2 ping statistics --- 00:08:33.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.780 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=76109 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 76109 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 76109 ']' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:33.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:33.780 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.780 [2024-10-29 10:57:39.211612] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:33.780 [2024-10-29 10:57:39.212293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.039 [2024-10-29 10:57:39.367581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.039 [2024-10-29 10:57:39.390865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.039 [2024-10-29 10:57:39.390923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.039 [2024-10-29 10:57:39.390945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.039 [2024-10-29 10:57:39.390955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.039 [2024-10-29 10:57:39.390964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.039 [2024-10-29 10:57:39.391321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.039 [2024-10-29 10:57:39.425683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.039 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:34.298 [2024-10-29 10:57:39.723979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:34.298 ************************************ 00:08:34.298 START TEST lvs_grow_clean 00:08:34.298 ************************************ 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:34.298 10:57:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:34.298 10:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:34.865 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:34.865 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:34.865 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:34.865 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:34.866 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:35.434 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:35.434 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:35.434 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 lvol 150 00:08:35.434 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a5e19e4-11dd-41ec-85ac-433f875a0e66 00:08:35.434 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.434 10:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:35.693 [2024-10-29 10:57:41.127016] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:35.693 [2024-10-29 10:57:41.127110] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:35.693 true 00:08:35.693 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:35.693 10:57:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:35.952 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:35.952 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:36.211 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a5e19e4-11dd-41ec-85ac-433f875a0e66 00:08:36.469 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:36.727 [2024-10-29 10:57:42.115504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.727 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76185 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76185 /var/tmp/bdevperf.sock 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 76185 ']' 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.986 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:36.986 [2024-10-29 10:57:42.472351] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:36.986 [2024-10-29 10:57:42.472463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76185 ] 00:08:37.245 [2024-10-29 10:57:42.624053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.245 [2024-10-29 10:57:42.648606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.245 [2024-10-29 10:57:42.682232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.245 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.245 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:08:37.245 10:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:37.811 Nvme0n1 00:08:37.811 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:38.070 [ 00:08:38.070 { 00:08:38.070 "name": "Nvme0n1", 00:08:38.070 "aliases": [ 00:08:38.070 "1a5e19e4-11dd-41ec-85ac-433f875a0e66" 00:08:38.070 ], 00:08:38.070 "product_name": "NVMe disk", 00:08:38.070 "block_size": 4096, 00:08:38.070 "num_blocks": 38912, 00:08:38.070 "uuid": "1a5e19e4-11dd-41ec-85ac-433f875a0e66", 00:08:38.070 "numa_id": -1, 00:08:38.070 "assigned_rate_limits": { 00:08:38.070 "rw_ios_per_sec": 0, 00:08:38.070 "rw_mbytes_per_sec": 0, 00:08:38.070 "r_mbytes_per_sec": 0, 00:08:38.070 "w_mbytes_per_sec": 0 00:08:38.070 }, 00:08:38.070 "claimed": false, 00:08:38.070 "zoned": false, 00:08:38.070 "supported_io_types": { 00:08:38.070 "read": true, 00:08:38.070 "write": true, 00:08:38.070 "unmap": true, 00:08:38.070 "flush": true, 00:08:38.070 "reset": true, 00:08:38.070 "nvme_admin": true, 00:08:38.070 "nvme_io": true, 00:08:38.070 "nvme_io_md": false, 00:08:38.070 "write_zeroes": true, 00:08:38.070 "zcopy": false, 00:08:38.070 "get_zone_info": false, 00:08:38.070 "zone_management": false, 00:08:38.070 "zone_append": false, 00:08:38.070 "compare": true, 00:08:38.070 "compare_and_write": true, 00:08:38.070 "abort": true, 00:08:38.070 "seek_hole": false, 00:08:38.070 "seek_data": false, 00:08:38.070 "copy": true, 00:08:38.070 "nvme_iov_md": false 00:08:38.070 }, 00:08:38.070 "memory_domains": [ 00:08:38.070 { 00:08:38.070 "dma_device_id": "system", 00:08:38.070 "dma_device_type": 1 00:08:38.070 } 00:08:38.070 ], 00:08:38.070 "driver_specific": { 00:08:38.070 "nvme": [ 00:08:38.070 { 00:08:38.070 "trid": { 00:08:38.070 "trtype": "TCP", 00:08:38.070 "adrfam": "IPv4", 00:08:38.070 "traddr": "10.0.0.3", 00:08:38.070 "trsvcid": "4420", 00:08:38.070 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:38.070 }, 00:08:38.070 "ctrlr_data": { 00:08:38.070 "cntlid": 1, 00:08:38.070 "vendor_id": "0x8086", 00:08:38.070 "model_number": "SPDK bdev Controller", 00:08:38.070 "serial_number": "SPDK0", 00:08:38.070 "firmware_revision": "25.01", 00:08:38.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:38.070 "oacs": { 00:08:38.070 "security": 0, 00:08:38.070 "format": 0, 00:08:38.070 "firmware": 0, 
00:08:38.070 "ns_manage": 0 00:08:38.070 }, 00:08:38.070 "multi_ctrlr": true, 00:08:38.070 "ana_reporting": false 00:08:38.070 }, 00:08:38.070 "vs": { 00:08:38.070 "nvme_version": "1.3" 00:08:38.070 }, 00:08:38.070 "ns_data": { 00:08:38.070 "id": 1, 00:08:38.070 "can_share": true 00:08:38.070 } 00:08:38.070 } 00:08:38.070 ], 00:08:38.070 "mp_policy": "active_passive" 00:08:38.070 } 00:08:38.070 } 00:08:38.070 ] 00:08:38.070 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:38.070 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76200 00:08:38.070 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:38.070 Running I/O for 10 seconds... 00:08:39.006 Latency(us) 00:08:39.006 [2024-10-29T10:57:44.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.006 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:39.006 [2024-10-29T10:57:44.503Z] =================================================================================================================== 00:08:39.006 [2024-10-29T10:57:44.503Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:39.006 00:08:39.942 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:40.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.200 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:40.200 [2024-10-29T10:57:45.697Z] =================================================================================================================== 00:08:40.200 [2024-10-29T10:57:45.697Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:40.200 00:08:40.200 true 00:08:40.200 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:40.200 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:40.766 10:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:40.766 10:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:40.766 10:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 76200 00:08:41.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.024 Nvme0n1 : 3.00 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:08:41.024 [2024-10-29T10:57:46.521Z] =================================================================================================================== 00:08:41.024 [2024-10-29T10:57:46.521Z] Total : 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:08:41.024 00:08:41.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.991 Nvme0n1 : 4.00 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:08:41.991 [2024-10-29T10:57:47.488Z] 
=================================================================================================================== 00:08:41.991 [2024-10-29T10:57:47.488Z] Total : 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:08:41.991 00:08:43.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.366 Nvme0n1 : 5.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:43.366 [2024-10-29T10:57:48.863Z] =================================================================================================================== 00:08:43.366 [2024-10-29T10:57:48.863Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:43.366 00:08:44.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.305 Nvme0n1 : 6.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:44.305 [2024-10-29T10:57:49.802Z] =================================================================================================================== 00:08:44.305 [2024-10-29T10:57:49.802Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:44.305 00:08:45.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.241 Nvme0n1 : 7.00 6567.71 25.66 0.00 0.00 0.00 0.00 0.00 00:08:45.241 [2024-10-29T10:57:50.738Z] =================================================================================================================== 00:08:45.241 [2024-10-29T10:57:50.738Z] Total : 6567.71 25.66 0.00 0.00 0.00 0.00 0.00 00:08:45.241 00:08:46.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.178 Nvme0n1 : 8.00 6556.38 25.61 0.00 0.00 0.00 0.00 0.00 00:08:46.178 [2024-10-29T10:57:51.675Z] =================================================================================================================== 00:08:46.178 [2024-10-29T10:57:51.675Z] Total : 6556.38 25.61 0.00 0.00 0.00 0.00 0.00 00:08:46.178 00:08:47.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.116 Nvme0n1 : 9.00 6430.56 25.12 0.00 0.00 0.00 0.00 0.00 00:08:47.116 [2024-10-29T10:57:52.613Z] =================================================================================================================== 00:08:47.116 [2024-10-29T10:57:52.613Z] Total : 6430.56 25.12 0.00 0.00 0.00 0.00 0.00 00:08:47.116 00:08:48.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.052 Nvme0n1 : 10.00 6409.80 25.04 0.00 0.00 0.00 0.00 0.00 00:08:48.052 [2024-10-29T10:57:53.549Z] =================================================================================================================== 00:08:48.052 [2024-10-29T10:57:53.549Z] Total : 6409.80 25.04 0.00 0.00 0.00 0.00 0.00 00:08:48.052 00:08:48.052 00:08:48.052 Latency(us) 00:08:48.052 [2024-10-29T10:57:53.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.052 Nvme0n1 : 10.01 6413.62 25.05 0.00 0.00 19949.21 16443.58 165865.66 00:08:48.052 [2024-10-29T10:57:53.549Z] =================================================================================================================== 00:08:48.052 [2024-10-29T10:57:53.549Z] Total : 6413.62 25.05 0.00 0.00 19949.21 16443.58 165865.66 00:08:48.052 { 00:08:48.052 "results": [ 00:08:48.052 { 00:08:48.052 "job": "Nvme0n1", 00:08:48.052 "core_mask": "0x2", 00:08:48.052 "workload": "randwrite", 00:08:48.052 "status": "finished", 00:08:48.052 "queue_depth": 128, 00:08:48.052 "io_size": 4096, 00:08:48.052 "runtime": 
10.013994, 00:08:48.052 "iops": 6413.624773491975, 00:08:48.052 "mibps": 25.05322177145303, 00:08:48.052 "io_failed": 0, 00:08:48.052 "io_timeout": 0, 00:08:48.052 "avg_latency_us": 19949.211423524317, 00:08:48.052 "min_latency_us": 16443.578181818182, 00:08:48.052 "max_latency_us": 165865.65818181817 00:08:48.052 } 00:08:48.052 ], 00:08:48.052 "core_count": 1 00:08:48.052 } 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76185 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 76185 ']' 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 76185 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76185 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:48.052 killing process with pid 76185 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76185' 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 76185 00:08:48.052 Received shutdown signal, test time was about 10.000000 seconds 00:08:48.052 00:08:48.052 Latency(us) 00:08:48.052 [2024-10-29T10:57:53.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.052 [2024-10-29T10:57:53.549Z] =================================================================================================================== 00:08:48.052 [2024-10-29T10:57:53.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:48.052 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 76185 00:08:48.311 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:48.570 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:48.829 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:48.829 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:49.089 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:49.089 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:49.089 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.348 [2024-10-29 10:57:54.753034] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:49.348 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:49.607 request: 00:08:49.607 { 00:08:49.607 "uuid": "0e660f51-b9a2-41f4-9a8e-a73e79c07463", 00:08:49.607 "method": "bdev_lvol_get_lvstores", 00:08:49.607 "req_id": 1 00:08:49.607 } 00:08:49.607 Got JSON-RPC error response 00:08:49.607 response: 00:08:49.607 { 00:08:49.607 "code": -19, 00:08:49.607 "message": "No such device" 00:08:49.607 } 00:08:49.607 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:49.607 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.607 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.607 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.607 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.866 aio_bdev 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
1a5e19e4-11dd-41ec-85ac-433f875a0e66 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=1a5e19e4-11dd-41ec-85ac-433f875a0e66 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:49.866 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:50.125 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a5e19e4-11dd-41ec-85ac-433f875a0e66 -t 2000 00:08:50.385 [ 00:08:50.385 { 00:08:50.385 "name": "1a5e19e4-11dd-41ec-85ac-433f875a0e66", 00:08:50.385 "aliases": [ 00:08:50.385 "lvs/lvol" 00:08:50.385 ], 00:08:50.385 "product_name": "Logical Volume", 00:08:50.385 "block_size": 4096, 00:08:50.385 "num_blocks": 38912, 00:08:50.385 "uuid": "1a5e19e4-11dd-41ec-85ac-433f875a0e66", 00:08:50.385 "assigned_rate_limits": { 00:08:50.385 "rw_ios_per_sec": 0, 00:08:50.385 "rw_mbytes_per_sec": 0, 00:08:50.385 "r_mbytes_per_sec": 0, 00:08:50.385 "w_mbytes_per_sec": 0 00:08:50.385 }, 00:08:50.385 "claimed": false, 00:08:50.385 "zoned": false, 00:08:50.385 "supported_io_types": { 00:08:50.385 "read": true, 00:08:50.385 "write": true, 00:08:50.385 "unmap": true, 00:08:50.385 "flush": false, 00:08:50.385 "reset": true, 00:08:50.385 "nvme_admin": false, 00:08:50.385 "nvme_io": false, 00:08:50.385 "nvme_io_md": false, 00:08:50.385 "write_zeroes": true, 00:08:50.385 "zcopy": false, 00:08:50.385 "get_zone_info": false, 00:08:50.385 "zone_management": false, 00:08:50.385 "zone_append": false, 00:08:50.385 "compare": false, 00:08:50.385 "compare_and_write": false, 00:08:50.385 "abort": false, 00:08:50.385 "seek_hole": true, 00:08:50.385 "seek_data": true, 00:08:50.385 "copy": false, 00:08:50.385 "nvme_iov_md": false 00:08:50.385 }, 00:08:50.385 "driver_specific": { 00:08:50.385 "lvol": { 00:08:50.385 "lvol_store_uuid": "0e660f51-b9a2-41f4-9a8e-a73e79c07463", 00:08:50.385 "base_bdev": "aio_bdev", 00:08:50.385 "thin_provision": false, 00:08:50.385 "num_allocated_clusters": 38, 00:08:50.385 "snapshot": false, 00:08:50.385 "clone": false, 00:08:50.385 "esnap_clone": false 00:08:50.385 } 00:08:50.385 } 00:08:50.385 } 00:08:50.385 ] 00:08:50.385 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:50.385 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:50.385 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:50.953 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:50.953 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:50.953 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:50.953 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:50.953 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1a5e19e4-11dd-41ec-85ac-433f875a0e66 00:08:51.212 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e660f51-b9a2-41f4-9a8e-a73e79c07463 00:08:51.781 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:52.040 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:52.299 ************************************ 00:08:52.299 END TEST lvs_grow_clean 00:08:52.299 ************************************ 00:08:52.299 00:08:52.299 real 0m17.963s 00:08:52.299 user 0m16.868s 00:08:52.299 sys 0m2.419s 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:52.299 ************************************ 00:08:52.299 START TEST lvs_grow_dirty 00:08:52.299 ************************************ 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:52.299 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.906 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:52.906 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:53.165 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0c19470-6c01-4176-9529-8f91ff6a1648 00:08:53.166 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:08:53.166 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:53.426 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:53.426 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:53.426 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b0c19470-6c01-4176-9529-8f91ff6a1648 lvol 150 00:08:53.686 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=91451f76-3bbb-4caf-9ad9-9c37dc836347 00:08:53.686 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.686 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:53.946 [2024-10-29 10:57:59.204273] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:53.946 [2024-10-29 10:57:59.204338] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:53.946 true 00:08:53.946 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:53.946 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:08:54.212 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:54.212 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:54.471 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 91451f76-3bbb-4caf-9ad9-9c37dc836347 00:08:54.731 10:58:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:54.991 [2024-10-29 10:58:00.389125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:54.991 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76454 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:55.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76454 /var/tmp/bdevperf.sock 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 76454 ']' 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.250 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.250 [2024-10-29 10:58:00.685492] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:08:55.250 [2024-10-29 10:58:00.685762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76454 ] 00:08:55.510 [2024-10-29 10:58:00.827969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.510 [2024-10-29 10:58:00.847862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.510 [2024-10-29 10:58:00.878174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.510 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.510 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:55.510 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:55.770 Nvme0n1 00:08:55.770 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:56.028 [ 00:08:56.028 { 00:08:56.028 "name": "Nvme0n1", 00:08:56.028 "aliases": [ 00:08:56.028 "91451f76-3bbb-4caf-9ad9-9c37dc836347" 00:08:56.028 ], 00:08:56.028 "product_name": "NVMe disk", 00:08:56.028 "block_size": 4096, 00:08:56.028 "num_blocks": 38912, 00:08:56.028 "uuid": "91451f76-3bbb-4caf-9ad9-9c37dc836347", 00:08:56.028 "numa_id": -1, 00:08:56.028 "assigned_rate_limits": { 00:08:56.028 "rw_ios_per_sec": 0, 00:08:56.028 "rw_mbytes_per_sec": 0, 00:08:56.028 "r_mbytes_per_sec": 0, 00:08:56.028 "w_mbytes_per_sec": 0 00:08:56.028 }, 00:08:56.028 "claimed": false, 00:08:56.028 "zoned": false, 00:08:56.029 "supported_io_types": { 00:08:56.029 "read": true, 00:08:56.029 "write": true, 00:08:56.029 "unmap": true, 00:08:56.029 "flush": true, 00:08:56.029 "reset": true, 00:08:56.029 "nvme_admin": true, 00:08:56.029 "nvme_io": true, 00:08:56.029 "nvme_io_md": false, 00:08:56.029 "write_zeroes": true, 00:08:56.029 "zcopy": false, 00:08:56.029 "get_zone_info": false, 00:08:56.029 "zone_management": false, 00:08:56.029 "zone_append": false, 00:08:56.029 "compare": true, 00:08:56.029 "compare_and_write": true, 00:08:56.029 "abort": true, 00:08:56.029 "seek_hole": false, 00:08:56.029 "seek_data": false, 00:08:56.029 "copy": true, 00:08:56.029 "nvme_iov_md": false 00:08:56.029 }, 00:08:56.029 "memory_domains": [ 00:08:56.029 { 00:08:56.029 "dma_device_id": "system", 00:08:56.029 "dma_device_type": 1 00:08:56.029 } 00:08:56.029 ], 00:08:56.029 "driver_specific": { 00:08:56.029 "nvme": [ 00:08:56.029 { 00:08:56.029 "trid": { 00:08:56.029 "trtype": "TCP", 00:08:56.029 "adrfam": "IPv4", 00:08:56.029 "traddr": "10.0.0.3", 00:08:56.029 "trsvcid": "4420", 00:08:56.029 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:56.029 }, 00:08:56.029 "ctrlr_data": { 00:08:56.029 "cntlid": 1, 00:08:56.029 "vendor_id": "0x8086", 00:08:56.029 "model_number": "SPDK bdev Controller", 00:08:56.029 "serial_number": "SPDK0", 00:08:56.029 "firmware_revision": "25.01", 00:08:56.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:56.029 "oacs": { 00:08:56.029 "security": 0, 00:08:56.029 "format": 0, 00:08:56.029 "firmware": 0, 
00:08:56.029 "ns_manage": 0 00:08:56.029 }, 00:08:56.029 "multi_ctrlr": true, 00:08:56.029 "ana_reporting": false 00:08:56.029 }, 00:08:56.029 "vs": { 00:08:56.029 "nvme_version": "1.3" 00:08:56.029 }, 00:08:56.029 "ns_data": { 00:08:56.029 "id": 1, 00:08:56.029 "can_share": true 00:08:56.029 } 00:08:56.029 } 00:08:56.029 ], 00:08:56.029 "mp_policy": "active_passive" 00:08:56.029 } 00:08:56.029 } 00:08:56.029 ] 00:08:56.029 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:56.029 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76470 00:08:56.029 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:56.288 Running I/O for 10 seconds... 00:08:57.223 Latency(us) 00:08:57.223 [2024-10-29T10:58:02.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.223 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:57.223 [2024-10-29T10:58:02.720Z] =================================================================================================================== 00:08:57.223 [2024-10-29T10:58:02.720Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:57.223 00:08:58.159 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:08:58.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.159 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:58.159 [2024-10-29T10:58:03.656Z] =================================================================================================================== 00:08:58.159 [2024-10-29T10:58:03.656Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:58.159 00:08:58.419 true 00:08:58.419 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:08:58.419 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:58.987 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:58.987 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:58.987 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 76470 00:08:59.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.246 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:59.246 [2024-10-29T10:58:04.743Z] =================================================================================================================== 00:08:59.246 [2024-10-29T10:58:04.743Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:59.246 00:09:00.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.184 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:00.184 [2024-10-29T10:58:05.681Z] 
=================================================================================================================== 00:09:00.184 [2024-10-29T10:58:05.681Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:00.184 00:09:01.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.562 Nvme0n1 : 5.00 6578.60 25.70 0.00 0.00 0.00 0.00 0.00 00:09:01.562 [2024-10-29T10:58:07.059Z] =================================================================================================================== 00:09:01.562 [2024-10-29T10:58:07.059Z] Total : 6578.60 25.70 0.00 0.00 0.00 0.00 0.00 00:09:01.562 00:09:02.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.141 Nvme0n1 : 6.00 6582.83 25.71 0.00 0.00 0.00 0.00 0.00 00:09:02.141 [2024-10-29T10:58:07.638Z] =================================================================================================================== 00:09:02.141 [2024-10-29T10:58:07.638Z] Total : 6582.83 25.71 0.00 0.00 0.00 0.00 0.00 00:09:02.141 00:09:03.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.547 Nvme0n1 : 7.00 6549.57 25.58 0.00 0.00 0.00 0.00 0.00 00:09:03.547 [2024-10-29T10:58:09.044Z] =================================================================================================================== 00:09:03.547 [2024-10-29T10:58:09.044Z] Total : 6549.57 25.58 0.00 0.00 0.00 0.00 0.00 00:09:03.547 00:09:04.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.484 Nvme0n1 : 8.00 6462.50 25.24 0.00 0.00 0.00 0.00 0.00 00:09:04.484 [2024-10-29T10:58:09.981Z] =================================================================================================================== 00:09:04.484 [2024-10-29T10:58:09.981Z] Total : 6462.50 25.24 0.00 0.00 0.00 0.00 0.00 00:09:04.484 00:09:05.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.421 Nvme0n1 : 9.00 6442.67 25.17 0.00 0.00 0.00 0.00 0.00 00:09:05.421 [2024-10-29T10:58:10.918Z] =================================================================================================================== 00:09:05.421 [2024-10-29T10:58:10.918Z] Total : 6442.67 25.17 0.00 0.00 0.00 0.00 0.00 00:09:05.421 00:09:06.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.357 Nvme0n1 : 10.00 6420.70 25.08 0.00 0.00 0.00 0.00 0.00 00:09:06.357 [2024-10-29T10:58:11.855Z] =================================================================================================================== 00:09:06.358 [2024-10-29T10:58:11.855Z] Total : 6420.70 25.08 0.00 0.00 0.00 0.00 0.00 00:09:06.358 00:09:06.358 00:09:06.358 Latency(us) 00:09:06.358 [2024-10-29T10:58:11.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.358 Nvme0n1 : 10.01 6424.43 25.10 0.00 0.00 19917.69 4021.53 148707.14 00:09:06.358 [2024-10-29T10:58:11.855Z] =================================================================================================================== 00:09:06.358 [2024-10-29T10:58:11.855Z] Total : 6424.43 25.10 0.00 0.00 19917.69 4021.53 148707.14 00:09:06.358 { 00:09:06.358 "results": [ 00:09:06.358 { 00:09:06.358 "job": "Nvme0n1", 00:09:06.358 "core_mask": "0x2", 00:09:06.358 "workload": "randwrite", 00:09:06.358 "status": "finished", 00:09:06.358 "queue_depth": 128, 00:09:06.358 "io_size": 4096, 00:09:06.358 "runtime": 
10.014122, 00:09:06.358 "iops": 6424.427423592403, 00:09:06.358 "mibps": 25.095419623407825, 00:09:06.358 "io_failed": 0, 00:09:06.358 "io_timeout": 0, 00:09:06.358 "avg_latency_us": 19917.68790509902, 00:09:06.358 "min_latency_us": 4021.5272727272727, 00:09:06.358 "max_latency_us": 148707.14181818182 00:09:06.358 } 00:09:06.358 ], 00:09:06.358 "core_count": 1 00:09:06.358 } 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76454 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 76454 ']' 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 76454 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76454 00:09:06.358 killing process with pid 76454 00:09:06.358 Received shutdown signal, test time was about 10.000000 seconds 00:09:06.358 00:09:06.358 Latency(us) 00:09:06.358 [2024-10-29T10:58:11.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.358 [2024-10-29T10:58:11.855Z] =================================================================================================================== 00:09:06.358 [2024-10-29T10:58:11.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76454' 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 76454 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 76454 00:09:06.358 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:06.927 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.187 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:07.187 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 76109 
00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 76109 00:09:07.448 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 76109 Killed "${NVMF_APP[@]}" "$@" 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=76603 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 76603 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 76603 ']' 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.448 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.448 [2024-10-29 10:58:12.853050] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:09:07.448 [2024-10-29 10:58:12.853145] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.708 [2024-10-29 10:58:13.002963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.708 [2024-10-29 10:58:13.024889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.708 [2024-10-29 10:58:13.024971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.708 [2024-10-29 10:58:13.024984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.708 [2024-10-29 10:58:13.024993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.708 [2024-10-29 10:58:13.025000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:07.708 [2024-10-29 10:58:13.025299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.708 [2024-10-29 10:58:13.058019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.708 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.967 [2024-10-29 10:58:13.423385] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:07.967 [2024-10-29 10:58:13.423842] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:07.967 [2024-10-29 10:58:13.424147] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:08.226 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:08.226 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 91451f76-3bbb-4caf-9ad9-9c37dc836347 00:09:08.226 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=91451f76-3bbb-4caf-9ad9-9c37dc836347 00:09:08.226 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:08.227 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:08.227 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:08.227 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:08.227 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.486 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 91451f76-3bbb-4caf-9ad9-9c37dc836347 -t 2000 00:09:08.746 [ 00:09:08.746 { 00:09:08.746 "name": "91451f76-3bbb-4caf-9ad9-9c37dc836347", 00:09:08.746 "aliases": [ 00:09:08.746 "lvs/lvol" 00:09:08.746 ], 00:09:08.746 "product_name": "Logical Volume", 00:09:08.746 "block_size": 4096, 00:09:08.746 "num_blocks": 38912, 00:09:08.746 "uuid": "91451f76-3bbb-4caf-9ad9-9c37dc836347", 00:09:08.746 "assigned_rate_limits": { 00:09:08.746 "rw_ios_per_sec": 0, 00:09:08.746 "rw_mbytes_per_sec": 0, 00:09:08.746 "r_mbytes_per_sec": 0, 00:09:08.746 "w_mbytes_per_sec": 0 00:09:08.746 }, 00:09:08.746 
"claimed": false, 00:09:08.746 "zoned": false, 00:09:08.746 "supported_io_types": { 00:09:08.746 "read": true, 00:09:08.746 "write": true, 00:09:08.746 "unmap": true, 00:09:08.746 "flush": false, 00:09:08.746 "reset": true, 00:09:08.746 "nvme_admin": false, 00:09:08.746 "nvme_io": false, 00:09:08.746 "nvme_io_md": false, 00:09:08.746 "write_zeroes": true, 00:09:08.746 "zcopy": false, 00:09:08.746 "get_zone_info": false, 00:09:08.746 "zone_management": false, 00:09:08.746 "zone_append": false, 00:09:08.746 "compare": false, 00:09:08.746 "compare_and_write": false, 00:09:08.746 "abort": false, 00:09:08.746 "seek_hole": true, 00:09:08.746 "seek_data": true, 00:09:08.746 "copy": false, 00:09:08.746 "nvme_iov_md": false 00:09:08.746 }, 00:09:08.746 "driver_specific": { 00:09:08.746 "lvol": { 00:09:08.746 "lvol_store_uuid": "b0c19470-6c01-4176-9529-8f91ff6a1648", 00:09:08.746 "base_bdev": "aio_bdev", 00:09:08.746 "thin_provision": false, 00:09:08.746 "num_allocated_clusters": 38, 00:09:08.746 "snapshot": false, 00:09:08.746 "clone": false, 00:09:08.746 "esnap_clone": false 00:09:08.746 } 00:09:08.746 } 00:09:08.746 } 00:09:08.746 ] 00:09:08.746 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:08.746 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:08.746 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:09.006 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:09.006 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:09.006 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:09.266 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:09.266 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.525 [2024-10-29 10:58:14.961994] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:09.525 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.525 10:58:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:09.525 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:10.093 request: 00:09:10.093 { 00:09:10.093 "uuid": "b0c19470-6c01-4176-9529-8f91ff6a1648", 00:09:10.093 "method": "bdev_lvol_get_lvstores", 00:09:10.093 "req_id": 1 00:09:10.093 } 00:09:10.093 Got JSON-RPC error response 00:09:10.093 response: 00:09:10.093 { 00:09:10.093 "code": -19, 00:09:10.093 "message": "No such device" 00:09:10.093 } 00:09:10.093 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:10.093 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.093 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.093 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.093 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:10.093 aio_bdev 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 91451f76-3bbb-4caf-9ad9-9c37dc836347 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=91451f76-3bbb-4caf-9ad9-9c37dc836347 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:10.368 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:10.636 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 91451f76-3bbb-4caf-9ad9-9c37dc836347 -t 2000 00:09:10.895 [ 00:09:10.895 { 
00:09:10.895 "name": "91451f76-3bbb-4caf-9ad9-9c37dc836347", 00:09:10.895 "aliases": [ 00:09:10.895 "lvs/lvol" 00:09:10.895 ], 00:09:10.895 "product_name": "Logical Volume", 00:09:10.895 "block_size": 4096, 00:09:10.895 "num_blocks": 38912, 00:09:10.895 "uuid": "91451f76-3bbb-4caf-9ad9-9c37dc836347", 00:09:10.895 "assigned_rate_limits": { 00:09:10.895 "rw_ios_per_sec": 0, 00:09:10.895 "rw_mbytes_per_sec": 0, 00:09:10.895 "r_mbytes_per_sec": 0, 00:09:10.895 "w_mbytes_per_sec": 0 00:09:10.895 }, 00:09:10.895 "claimed": false, 00:09:10.895 "zoned": false, 00:09:10.895 "supported_io_types": { 00:09:10.895 "read": true, 00:09:10.895 "write": true, 00:09:10.895 "unmap": true, 00:09:10.895 "flush": false, 00:09:10.895 "reset": true, 00:09:10.895 "nvme_admin": false, 00:09:10.895 "nvme_io": false, 00:09:10.895 "nvme_io_md": false, 00:09:10.895 "write_zeroes": true, 00:09:10.895 "zcopy": false, 00:09:10.895 "get_zone_info": false, 00:09:10.895 "zone_management": false, 00:09:10.895 "zone_append": false, 00:09:10.895 "compare": false, 00:09:10.895 "compare_and_write": false, 00:09:10.895 "abort": false, 00:09:10.896 "seek_hole": true, 00:09:10.896 "seek_data": true, 00:09:10.896 "copy": false, 00:09:10.896 "nvme_iov_md": false 00:09:10.896 }, 00:09:10.896 "driver_specific": { 00:09:10.896 "lvol": { 00:09:10.896 "lvol_store_uuid": "b0c19470-6c01-4176-9529-8f91ff6a1648", 00:09:10.896 "base_bdev": "aio_bdev", 00:09:10.896 "thin_provision": false, 00:09:10.896 "num_allocated_clusters": 38, 00:09:10.896 "snapshot": false, 00:09:10.896 "clone": false, 00:09:10.896 "esnap_clone": false 00:09:10.896 } 00:09:10.896 } 00:09:10.896 } 00:09:10.896 ] 00:09:10.896 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:09:10.896 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:10.896 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:11.155 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:11.155 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:11.155 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:11.414 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:11.414 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 91451f76-3bbb-4caf-9ad9-9c37dc836347 00:09:11.982 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0c19470-6c01-4176-9529-8f91ff6a1648 00:09:12.241 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:12.501 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.759 ************************************ 00:09:12.759 END TEST lvs_grow_dirty 00:09:12.759 ************************************ 00:09:12.759 00:09:12.759 real 0m20.445s 00:09:12.759 user 0m40.414s 00:09:12.759 sys 0m9.327s 00:09:12.759 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.759 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:13.018 nvmf_trace.0 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.018 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.953 rmmod nvme_tcp 00:09:13.953 rmmod nvme_fabrics 00:09:13.953 rmmod nvme_keyring 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 76603 ']' 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 76603 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 76603 ']' 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 76603 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:09:13.953 10:58:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76603 00:09:13.953 killing process with pid 76603 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76603' 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 76603 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 76603 00:09:13.953 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:13.954 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:14.214 00:09:14.214 real 0m41.040s 00:09:14.214 user 1m4.356s 00:09:14.214 sys 0m13.172s 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.214 ************************************ 00:09:14.214 END TEST nvmf_lvs_grow 00:09:14.214 ************************************ 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.214 ************************************ 00:09:14.214 START TEST nvmf_bdev_io_wait 00:09:14.214 ************************************ 00:09:14.214 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:14.474 * Looking for test storage... 
00:09:14.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.474 --rc genhtml_branch_coverage=1 00:09:14.474 --rc genhtml_function_coverage=1 00:09:14.474 --rc genhtml_legend=1 00:09:14.474 --rc geninfo_all_blocks=1 00:09:14.474 --rc geninfo_unexecuted_blocks=1 00:09:14.474 00:09:14.474 ' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.474 --rc genhtml_branch_coverage=1 00:09:14.474 --rc genhtml_function_coverage=1 00:09:14.474 --rc genhtml_legend=1 00:09:14.474 --rc geninfo_all_blocks=1 00:09:14.474 --rc geninfo_unexecuted_blocks=1 00:09:14.474 00:09:14.474 ' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.474 --rc genhtml_branch_coverage=1 00:09:14.474 --rc genhtml_function_coverage=1 00:09:14.474 --rc genhtml_legend=1 00:09:14.474 --rc geninfo_all_blocks=1 00:09:14.474 --rc geninfo_unexecuted_blocks=1 00:09:14.474 00:09:14.474 ' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.474 --rc genhtml_branch_coverage=1 00:09:14.474 --rc genhtml_function_coverage=1 00:09:14.474 --rc genhtml_legend=1 00:09:14.474 --rc geninfo_all_blocks=1 00:09:14.474 --rc geninfo_unexecuted_blocks=1 00:09:14.474 00:09:14.474 ' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.474 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.474 
10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:14.474 Cannot find device "nvmf_init_br" 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:14.474 Cannot find device "nvmf_init_br2" 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:14.474 Cannot find device "nvmf_tgt_br" 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.474 Cannot find device "nvmf_tgt_br2" 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:14.474 Cannot find device "nvmf_init_br" 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:14.474 Cannot find device "nvmf_init_br2" 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:14.474 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:14.732 Cannot find device "nvmf_tgt_br" 00:09:14.732 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:14.732 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:14.732 Cannot find device "nvmf_tgt_br2" 00:09:14.732 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:14.732 10:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:14.732 Cannot find device "nvmf_br" 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:14.732 Cannot find device "nvmf_init_if" 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:14.732 Cannot find device "nvmf_init_if2" 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:14.732 
10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.732 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:14.732 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:14.733 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:14.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:14.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:09:14.991 00:09:14.991 --- 10.0.0.3 ping statistics --- 00:09:14.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.991 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:14.991 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:14.991 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:14.991 00:09:14.991 --- 10.0.0.4 ping statistics --- 00:09:14.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.991 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:14.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:09:14.991 00:09:14.991 --- 10.0.0.1 ping statistics --- 00:09:14.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.991 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:14.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:09:14.991 00:09:14.991 --- 10.0.0.2 ping statistics --- 00:09:14.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.991 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=76979 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 76979 00:09:14.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 76979 ']' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.991 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.991 [2024-10-29 10:58:20.374103] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
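
[Editor's note] nvmfappstart launches the target inside the test namespace and then blocks until its JSON-RPC socket is up. A simplified sketch of what that amounts to, with the binary path, flags, and socket path taken from the trace; the polling loop and its ~10 s timeout are an assumption standing in for the waitforlisten helper in autotest_common.sh.

    # Start the NVMe-oF target in the test namespace: tracepoint group mask
    # 0xFFFF (-e), four cores (-m 0xF), init deferred until RPC (--wait-for-rpc).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Poll (up to ~10 s) for the application's RPC socket to appear.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done
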
00:09:14.991 [2024-10-29 10:58:20.374876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.250 [2024-10-29 10:58:20.529919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.250 [2024-10-29 10:58:20.556886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.250 [2024-10-29 10:58:20.557141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.250 [2024-10-29 10:58:20.557299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.250 [2024-10-29 10:58:20.557555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.250 [2024-10-29 10:58:20.557739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.250 [2024-10-29 10:58:20.558723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.250 [2024-10-29 10:58:20.558871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.250 [2024-10-29 10:58:20.559057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.250 [2024-10-29 10:58:20.559065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.251 [2024-10-29 10:58:20.725741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.251 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.251 [2024-10-29 10:58:20.736940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.510 Malloc0 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.510 [2024-10-29 10:58:20.790758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=77012 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=77014 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.510 10:58:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.510 { 00:09:15.510 "params": { 00:09:15.510 "name": "Nvme$subsystem", 00:09:15.510 "trtype": "$TEST_TRANSPORT", 00:09:15.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.510 "adrfam": "ipv4", 00:09:15.510 "trsvcid": "$NVMF_PORT", 00:09:15.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.510 "hdgst": ${hdgst:-false}, 00:09:15.510 "ddgst": ${ddgst:-false} 00:09:15.510 }, 00:09:15.510 "method": "bdev_nvme_attach_controller" 00:09:15.510 } 00:09:15.510 EOF 00:09:15.510 )") 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=77016 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=77018 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.510 { 00:09:15.510 "params": { 00:09:15.510 "name": "Nvme$subsystem", 00:09:15.510 "trtype": "$TEST_TRANSPORT", 00:09:15.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.510 "adrfam": "ipv4", 00:09:15.510 "trsvcid": "$NVMF_PORT", 00:09:15.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.510 "hdgst": ${hdgst:-false}, 00:09:15.510 "ddgst": ${ddgst:-false} 00:09:15.510 }, 00:09:15.510 "method": "bdev_nvme_attach_controller" 00:09:15.510 } 00:09:15.510 EOF 00:09:15.510 )") 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:09:15.510 { 00:09:15.510 "params": { 00:09:15.510 "name": "Nvme$subsystem", 00:09:15.510 "trtype": "$TEST_TRANSPORT", 00:09:15.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.510 "adrfam": "ipv4", 00:09:15.510 "trsvcid": "$NVMF_PORT", 00:09:15.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.510 "hdgst": ${hdgst:-false}, 00:09:15.510 "ddgst": ${ddgst:-false} 00:09:15.510 }, 00:09:15.510 "method": "bdev_nvme_attach_controller" 00:09:15.510 } 00:09:15.510 EOF 00:09:15.510 )") 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:15.510 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:15.510 { 00:09:15.510 "params": { 00:09:15.510 "name": "Nvme$subsystem", 00:09:15.510 "trtype": "$TEST_TRANSPORT", 00:09:15.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:15.511 "adrfam": "ipv4", 00:09:15.511 "trsvcid": "$NVMF_PORT", 00:09:15.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:15.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:15.511 "hdgst": ${hdgst:-false}, 00:09:15.511 "ddgst": ${ddgst:-false} 00:09:15.511 }, 00:09:15.511 "method": "bdev_nvme_attach_controller" 00:09:15.511 } 00:09:15.511 EOF 00:09:15.511 )") 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.511 "params": { 00:09:15.511 "name": "Nvme1", 00:09:15.511 "trtype": "tcp", 00:09:15.511 "traddr": "10.0.0.3", 00:09:15.511 "adrfam": "ipv4", 00:09:15.511 "trsvcid": "4420", 00:09:15.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.511 "hdgst": false, 00:09:15.511 "ddgst": false 00:09:15.511 }, 00:09:15.511 "method": "bdev_nvme_attach_controller" 00:09:15.511 }' 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.511 "params": { 00:09:15.511 "name": "Nvme1", 00:09:15.511 "trtype": "tcp", 00:09:15.511 "traddr": "10.0.0.3", 00:09:15.511 "adrfam": "ipv4", 00:09:15.511 "trsvcid": "4420", 00:09:15.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.511 "hdgst": false, 00:09:15.511 "ddgst": false 00:09:15.511 }, 00:09:15.511 "method": "bdev_nvme_attach_controller" 00:09:15.511 }' 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.511 "params": { 00:09:15.511 "name": "Nvme1", 00:09:15.511 "trtype": "tcp", 00:09:15.511 "traddr": "10.0.0.3", 00:09:15.511 "adrfam": "ipv4", 00:09:15.511 "trsvcid": "4420", 00:09:15.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.511 "hdgst": false, 00:09:15.511 "ddgst": false 00:09:15.511 }, 00:09:15.511 "method": "bdev_nvme_attach_controller" 00:09:15.511 }' 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:15.511 "params": { 00:09:15.511 "name": "Nvme1", 00:09:15.511 "trtype": "tcp", 00:09:15.511 "traddr": "10.0.0.3", 00:09:15.511 "adrfam": "ipv4", 00:09:15.511 "trsvcid": "4420", 00:09:15.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:15.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:15.511 "hdgst": false, 00:09:15.511 "ddgst": false 00:09:15.511 }, 00:09:15.511 "method": "bdev_nvme_attach_controller" 00:09:15.511 }' 00:09:15.511 10:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 77012 00:09:15.511 [2024-10-29 10:58:20.860458] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:09:15.511 [2024-10-29 10:58:20.860687] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:15.511 [2024-10-29 10:58:20.861959] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:09:15.511 [2024-10-29 10:58:20.862196] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:15.511 [2024-10-29 10:58:20.883042] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:09:15.511 [2024-10-29 10:58:20.883116] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:15.511 [2024-10-29 10:58:20.888569] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
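
[Editor's note] Each bdevperf instance receives its controller definition over an anonymous pipe (`--json /dev/fd/63`), generated by gen_nvmf_target_json. A sketch of the same idea using process substitution: the parameter values mirror the printf output above, but the function name gen_attach_json and the outer "subsystems"/"config" wrapper are assumptions (only the method/params fragment is visible in the trace).

    # Emit a minimal bdevperf JSON config that attaches one NVMe-oF controller.
    gen_attach_json() {
        printf '%s\n' '{
          "subsystems": [{
            "subsystem": "bdev",
            "config": [{
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }]
          }]
        }'
    }

    # The write job from the trace; the read/flush/unmap jobs differ only in -w, -m and -i.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json <(gen_attach_json) -q 128 -o 4096 -w write -t 1 -s 256
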
00:09:15.511 [2024-10-29 10:58:20.888650] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:15.770 [2024-10-29 10:58:21.057330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.770 [2024-10-29 10:58:21.073656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:15.770 [2024-10-29 10:58:21.087502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.770 [2024-10-29 10:58:21.103690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.770 [2024-10-29 10:58:21.119927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:15.770 [2024-10-29 10:58:21.133912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.770 [2024-10-29 10:58:21.148469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.770 [2024-10-29 10:58:21.164226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:15.770 [2024-10-29 10:58:21.178047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.770 Running I/O for 1 seconds... 00:09:15.770 [2024-10-29 10:58:21.200317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.770 [2024-10-29 10:58:21.215909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:15.770 [2024-10-29 10:58:21.229640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.770 Running I/O for 1 seconds... 00:09:16.029 Running I/O for 1 seconds... 00:09:16.029 Running I/O for 1 seconds... 
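
[Editor's note] Before those four jobs can attach, the target is provisioned over JSON-RPC (the rpc_cmd calls traced above): a deliberately tiny bdev_io pool so the bdev_io_wait path gets exercised, deferred framework init, a TCP transport, a 64 MiB malloc bdev, and a subsystem with one namespace and a listener on 10.0.0.3:4420. The flags below are copied from the trace; invoking scripts/rpc.py directly against the default socket is a stand-in for the harness's rpc_cmd wrapper.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC bdev_set_options -p 5 -c 1            # tiny bdev_io pool/cache to force IO-wait retries
    $RPC framework_start_init                  # finish the init deferred by --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0  # 64 MiB RAM disk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
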
00:09:16.965 168344.00 IOPS, 657.59 MiB/s 00:09:16.965 Latency(us) 00:09:16.965 [2024-10-29T10:58:22.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.965 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:16.965 Nvme1n1 : 1.00 168021.50 656.33 0.00 0.00 757.96 389.12 1921.40 00:09:16.965 [2024-10-29T10:58:22.462Z] =================================================================================================================== 00:09:16.965 [2024-10-29T10:58:22.462Z] Total : 168021.50 656.33 0.00 0.00 757.96 389.12 1921.40 00:09:16.965 9477.00 IOPS, 37.02 MiB/s 00:09:16.965 Latency(us) 00:09:16.965 [2024-10-29T10:58:22.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.965 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:16.965 Nvme1n1 : 1.01 9518.79 37.18 0.00 0.00 13381.75 7923.90 21448.15 00:09:16.965 [2024-10-29T10:58:22.462Z] =================================================================================================================== 00:09:16.965 [2024-10-29T10:58:22.462Z] Total : 9518.79 37.18 0.00 0.00 13381.75 7923.90 21448.15 00:09:16.965 7346.00 IOPS, 28.70 MiB/s 00:09:16.965 Latency(us) 00:09:16.965 [2024-10-29T10:58:22.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.965 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:16.965 Nvme1n1 : 1.01 7412.00 28.95 0.00 0.00 17179.78 5957.82 27644.28 00:09:16.965 [2024-10-29T10:58:22.462Z] =================================================================================================================== 00:09:16.965 [2024-10-29T10:58:22.462Z] Total : 7412.00 28.95 0.00 0.00 17179.78 5957.82 27644.28 00:09:16.965 8140.00 IOPS, 31.80 MiB/s 00:09:16.965 Latency(us) 00:09:16.965 [2024-10-29T10:58:22.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.965 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:16.965 Nvme1n1 : 1.01 8208.03 32.06 0.00 0.00 15527.58 6315.29 25499.46 00:09:16.965 [2024-10-29T10:58:22.462Z] =================================================================================================================== 00:09:16.965 [2024-10-29T10:58:22.462Z] Total : 8208.03 32.06 0.00 0.00 15527.58 6315.29 25499.46 00:09:16.965 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 77014 00:09:16.965 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 77016 00:09:16.965 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 77018 00:09:16.965 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.965 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.965 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.231 rmmod nvme_tcp 00:09:17.231 rmmod nvme_fabrics 00:09:17.231 rmmod nvme_keyring 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 76979 ']' 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 76979 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 76979 ']' 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 76979 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76979 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:17.231 killing process with pid 76979 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76979' 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 76979 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 76979 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:17.231 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:17.512 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:17.513 00:09:17.513 real 0m3.327s 00:09:17.513 user 0m12.661s 00:09:17.513 sys 0m2.149s 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.513 10:58:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.513 ************************************ 00:09:17.513 END TEST nvmf_bdev_io_wait 00:09:17.513 ************************************ 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.773 ************************************ 00:09:17.773 START TEST nvmf_queue_depth 00:09:17.773 ************************************ 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:17.773 * Looking for test storage... 
00:09:17.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:17.773 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:17.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.774 --rc genhtml_branch_coverage=1 00:09:17.774 --rc genhtml_function_coverage=1 00:09:17.774 --rc genhtml_legend=1 00:09:17.774 --rc geninfo_all_blocks=1 00:09:17.774 --rc geninfo_unexecuted_blocks=1 00:09:17.774 00:09:17.774 ' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:17.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.774 --rc genhtml_branch_coverage=1 00:09:17.774 --rc genhtml_function_coverage=1 00:09:17.774 --rc genhtml_legend=1 00:09:17.774 --rc geninfo_all_blocks=1 00:09:17.774 --rc geninfo_unexecuted_blocks=1 00:09:17.774 00:09:17.774 ' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:17.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.774 --rc genhtml_branch_coverage=1 00:09:17.774 --rc genhtml_function_coverage=1 00:09:17.774 --rc genhtml_legend=1 00:09:17.774 --rc geninfo_all_blocks=1 00:09:17.774 --rc geninfo_unexecuted_blocks=1 00:09:17.774 00:09:17.774 ' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:17.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.774 --rc genhtml_branch_coverage=1 00:09:17.774 --rc genhtml_function_coverage=1 00:09:17.774 --rc genhtml_legend=1 00:09:17.774 --rc geninfo_all_blocks=1 00:09:17.774 --rc geninfo_unexecuted_blocks=1 00:09:17.774 00:09:17.774 ' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.774 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:17.774 
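
[Editor's note] Between the two tests, nvmftestfini (traced above) undoes everything in reverse: unload the host-side NVMe/TCP modules, kill the target process, strip only the SPDK-tagged firewall rules, and drop the veth/bridge/namespace topology. A condensed sketch of that sequence; the final `ip netns delete` is an assumption about what the remove_spdk_ns helper does, and error handling is simplified.

    # Unload host-side NVMe/TCP modules and stop the target process.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"

    # Remove only the firewall rules tagged SPDK_NVMF, leaving the rest of
    # the ruleset untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Drop the test topology and the target namespace.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true
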
10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:17.774 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:17.775 10:58:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:17.775 Cannot find device "nvmf_init_br" 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:17.775 Cannot find device "nvmf_init_br2" 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:17.775 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:18.034 Cannot find device "nvmf_tgt_br" 00:09:18.034 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:18.034 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.034 Cannot find device "nvmf_tgt_br2" 00:09:18.034 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:18.034 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:18.034 Cannot find device "nvmf_init_br" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:18.035 Cannot find device "nvmf_init_br2" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:18.035 Cannot find device "nvmf_tgt_br" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:18.035 Cannot find device "nvmf_tgt_br2" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:18.035 Cannot find device "nvmf_br" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:18.035 Cannot find device "nvmf_init_if" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:18.035 Cannot find device "nvmf_init_if2" 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.035 10:58:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:18.035 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.295 
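
[Editor's note] The queue_depth test rebuilds the same two-veth-pair topology: the host-side ends (nvmf_init_if*) keep 10.0.0.1/2, the namespace ends (nvmf_tgt_if*) get 10.0.0.3/4, and the bridge-facing peers are enslaved to nvmf_br so initiator and target traffic shares one L2 segment. The sketch below is condensed from the trace and shows only the first pair; the second pair is identical with the *2 names.

    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side pair: nvmf_init_if stays on the host, its peer joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    # Target-side pair: nvmf_tgt_if moves into the namespace, its peer joins the bridge.
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
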
10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:18.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:18.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:18.295 00:09:18.295 --- 10.0.0.3 ping statistics --- 00:09:18.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.295 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:18.295 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:18.295 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:09:18.295 00:09:18.295 --- 10.0.0.4 ping statistics --- 00:09:18.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.295 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:18.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:18.295 00:09:18.295 --- 10.0.0.1 ping statistics --- 00:09:18.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.295 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:18.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:18.295 00:09:18.295 --- 10.0.0.2 ping statistics --- 00:09:18.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.295 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=77278 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 77278 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 77278 ']' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:18.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:18.295 10:58:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:18.295 [2024-10-29 10:58:23.695172] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
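At this point the connectivity pings have passed and nvmfappstart launches nvmf_tgt inside the namespace, prefixing NVMF_APP with the netns exec command and then waiting for the RPC socket. A minimal sketch of that launch follows; SPDK_DIR is just the repo path as it appears in the log, and the polling loop is a stand-in for the harness's waitforlisten helper, not the harness code itself.

  # Hedged sketch of the target launch performed by nvmfappstart -m 0x2.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk            # path as printed in the log
  modprobe nvme-tcp                                # kernel NVMe/TCP initiator, loaded above at common.sh@502
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Stand-in for waitforlisten: poll the default RPC socket until the app answers.
  until "$SPDK_DIR"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done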
00:09:18.295 [2024-10-29 10:58:23.695265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.554 [2024-10-29 10:58:23.846655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.554 [2024-10-29 10:58:23.866871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.554 [2024-10-29 10:58:23.866931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.554 [2024-10-29 10:58:23.866940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.554 [2024-10-29 10:58:23.866947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.554 [2024-10-29 10:58:23.866953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.554 [2024-10-29 10:58:23.867213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.554 [2024-10-29 10:58:23.896251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.491 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.491 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 [2024-10-29 10:58:24.674283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 Malloc0 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 [2024-10-29 10:58:24.721714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=77310 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 77310 /var/tmp/bdevperf.sock 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 77310 ']' 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:19.492 10:58:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.492 [2024-10-29 10:58:24.786164] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
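The RPC sequence above (transport, malloc bdev, subsystem, namespace, listener) is the entire target-side configuration for this test; bdevperf then drives it over NVMe/TCP at queue depth 1024. Collected into one place, and with the rpc.py and bdevperf paths shortened relative to the absolute paths in the log, the flow looks roughly like this:

  # Target-side setup, the same RPCs issued above (queue_depth.sh@23-27).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB ramdisk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Initiator side: bdevperf waits (-z) on its own RPC socket, gets an NVMe-oF controller
  # attached, then perform_tests runs the 10 s verify workload at queue depth 1024.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests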
00:09:19.492 [2024-10-29 10:58:24.786258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77310 ] 00:09:19.492 [2024-10-29 10:58:24.941476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.492 [2024-10-29 10:58:24.965062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.751 [2024-10-29 10:58:24.998624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.751 NVMe0n1 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.751 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.751 Running I/O for 10 seconds... 00:09:22.063 6552.00 IOPS, 25.59 MiB/s [2024-10-29T10:58:28.495Z] 7421.00 IOPS, 28.99 MiB/s [2024-10-29T10:58:29.431Z] 7864.33 IOPS, 30.72 MiB/s [2024-10-29T10:58:30.364Z] 8207.75 IOPS, 32.06 MiB/s [2024-10-29T10:58:31.298Z] 8415.00 IOPS, 32.87 MiB/s [2024-10-29T10:58:32.271Z] 8557.83 IOPS, 33.43 MiB/s [2024-10-29T10:58:33.647Z] 8633.43 IOPS, 33.72 MiB/s [2024-10-29T10:58:34.584Z] 8691.12 IOPS, 33.95 MiB/s [2024-10-29T10:58:35.522Z] 8794.56 IOPS, 34.35 MiB/s [2024-10-29T10:58:35.522Z] 8925.10 IOPS, 34.86 MiB/s 00:09:30.025 Latency(us) 00:09:30.025 [2024-10-29T10:58:35.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.025 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:30.025 Verification LBA range: start 0x0 length 0x4000 00:09:30.025 NVMe0n1 : 10.09 8949.03 34.96 0.00 0.00 113917.93 27882.59 95801.72 00:09:30.025 [2024-10-29T10:58:35.522Z] =================================================================================================================== 00:09:30.025 [2024-10-29T10:58:35.522Z] Total : 8949.03 34.96 0.00 0.00 113917.93 27882.59 95801.72 00:09:30.025 { 00:09:30.025 "results": [ 00:09:30.025 { 00:09:30.025 "job": "NVMe0n1", 00:09:30.025 "core_mask": "0x1", 00:09:30.025 "workload": "verify", 00:09:30.025 "status": "finished", 00:09:30.025 "verify_range": { 00:09:30.025 "start": 0, 00:09:30.025 "length": 16384 00:09:30.025 }, 00:09:30.025 "queue_depth": 1024, 00:09:30.025 "io_size": 4096, 00:09:30.025 "runtime": 10.087689, 00:09:30.025 "iops": 8949.026878207686, 00:09:30.025 "mibps": 34.95713624299877, 00:09:30.025 "io_failed": 0, 00:09:30.025 "io_timeout": 0, 00:09:30.025 "avg_latency_us": 113917.93112866241, 00:09:30.025 "min_latency_us": 27882.589090909092, 00:09:30.025 "max_latency_us": 95801.71636363637 00:09:30.025 
} 00:09:30.025 ], 00:09:30.025 "core_count": 1 00:09:30.025 } 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 77310 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 77310 ']' 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 77310 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77310 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:30.025 killing process with pid 77310 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77310' 00:09:30.025 Received shutdown signal, test time was about 10.000000 seconds 00:09:30.025 00:09:30.025 Latency(us) 00:09:30.025 [2024-10-29T10:58:35.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.025 [2024-10-29T10:58:35.522Z] =================================================================================================================== 00:09:30.025 [2024-10-29T10:58:35.522Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 77310 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 77310 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.025 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.285 rmmod nvme_tcp 00:09:30.285 rmmod nvme_fabrics 00:09:30.285 rmmod nvme_keyring 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 77278 ']' 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 77278 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 77278 ']' 00:09:30.285 
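The block above is bdevperf's JSON result object for the run (roughly 8.9k IOPS at queue depth 1024 against the malloc-backed namespace). The harness only checks that the run completes, but if the JSON were captured to a file the headline numbers could be pulled out with jq; this is purely an illustration and not something the test itself does.

  # Assuming the JSON above was saved to results.json and jq is available on the host:
  jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, avg latency \(.avg_latency_us | floor) us"' results.json
  # prints: NVMe0n1: 8949 IOPS, avg latency 113917 us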
10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 77278 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77278 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:30.285 killing process with pid 77278 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77278' 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 77278 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 77278 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.285 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.544 10:58:35 
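Teardown mirrors the setup: iptr restores iptables minus every rule tagged with the SPDK_NVMF comment (which is why the ACCEPT rules were inserted with -m comment earlier), and nvmf_veth_fini unwinds the bridge, veth pairs, and namespace. A condensed sketch, reusing the names from the setup sketch above:

  # Drop only the rules the test added; they all carry the SPDK_NVMF comment tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Detach the veth ends from the bridge, then delete the devices; removing one end of a
  # veth pair removes its peer as well.
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk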
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.544 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.544 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:30.544 00:09:30.544 real 0m12.983s 00:09:30.544 user 0m21.956s 00:09:30.544 sys 0m2.075s 00:09:30.544 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.544 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.544 ************************************ 00:09:30.544 END TEST nvmf_queue_depth 00:09:30.544 ************************************ 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.805 ************************************ 00:09:30.805 START TEST nvmf_target_multipath 00:09:30.805 ************************************ 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:30.805 * Looking for test storage... 
00:09:30.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.805 --rc genhtml_branch_coverage=1 00:09:30.805 --rc genhtml_function_coverage=1 00:09:30.805 --rc genhtml_legend=1 00:09:30.805 --rc geninfo_all_blocks=1 00:09:30.805 --rc geninfo_unexecuted_blocks=1 00:09:30.805 00:09:30.805 ' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.805 --rc genhtml_branch_coverage=1 00:09:30.805 --rc genhtml_function_coverage=1 00:09:30.805 --rc genhtml_legend=1 00:09:30.805 --rc geninfo_all_blocks=1 00:09:30.805 --rc geninfo_unexecuted_blocks=1 00:09:30.805 00:09:30.805 ' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.805 --rc genhtml_branch_coverage=1 00:09:30.805 --rc genhtml_function_coverage=1 00:09:30.805 --rc genhtml_legend=1 00:09:30.805 --rc geninfo_all_blocks=1 00:09:30.805 --rc geninfo_unexecuted_blocks=1 00:09:30.805 00:09:30.805 ' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:30.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.805 --rc genhtml_branch_coverage=1 00:09:30.805 --rc genhtml_function_coverage=1 00:09:30.805 --rc genhtml_legend=1 00:09:30.805 --rc geninfo_all_blocks=1 00:09:30.805 --rc geninfo_unexecuted_blocks=1 00:09:30.805 00:09:30.805 ' 00:09:30.805 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.806 
10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.806 10:58:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:30.806 Cannot find device "nvmf_init_br" 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:30.806 Cannot find device "nvmf_init_br2" 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.806 Cannot find device "nvmf_tgt_br" 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:30.806 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.066 Cannot find device "nvmf_tgt_br2" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:31.066 Cannot find device "nvmf_init_br" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:31.066 Cannot find device "nvmf_init_br2" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:31.066 Cannot find device "nvmf_tgt_br" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:31.066 Cannot find device "nvmf_tgt_br2" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:31.066 Cannot find device "nvmf_br" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:31.066 Cannot find device "nvmf_init_if" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:31.066 Cannot find device "nvmf_init_if2" 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:31.066 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:09:31.326 00:09:31.326 --- 10.0.0.3 ping statistics --- 00:09:31.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.326 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:31.326 00:09:31.326 --- 10.0.0.4 ping statistics --- 00:09:31.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.326 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:31.326 00:09:31.326 --- 10.0.0.1 ping statistics --- 00:09:31.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.326 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:09:31.326 00:09:31.326 --- 10.0.0.2 ping statistics --- 00:09:31.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.326 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.326 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=77669 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 77669 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 77669 ']' 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:31.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:31.327 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.327 [2024-10-29 10:58:36.764457] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:09:31.327 [2024-10-29 10:58:36.764544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.587 [2024-10-29 10:58:36.920201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.587 [2024-10-29 10:58:36.945782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.587 [2024-10-29 10:58:36.945842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.587 [2024-10-29 10:58:36.945855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.587 [2024-10-29 10:58:36.945866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.587 [2024-10-29 10:58:36.945874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.587 [2024-10-29 10:58:36.946748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.587 [2024-10-29 10:58:36.947489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.587 [2024-10-29 10:58:36.947670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.587 [2024-10-29 10:58:36.947675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.587 [2024-10-29 10:58:36.981491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.587 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.154 [2024-10-29 10:58:37.344831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.154 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:32.412 Malloc0 00:09:32.412 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:32.670 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.928 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.928 [2024-10-29 10:58:38.400544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:32.928 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:33.494 [2024-10-29 10:58:38.704744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:33.494 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:33.494 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:33.751 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.751 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:09:33.751 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.751 10:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:33.751 10:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:35.657 10:58:41 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=77757 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:35.657 10:58:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:35.657 [global] 00:09:35.657 thread=1 00:09:35.657 invalidate=1 00:09:35.657 rw=randrw 00:09:35.657 time_based=1 00:09:35.657 runtime=6 00:09:35.657 ioengine=libaio 00:09:35.657 direct=1 00:09:35.657 bs=4096 00:09:35.657 iodepth=128 00:09:35.657 norandommap=0 00:09:35.657 numjobs=1 00:09:35.657 00:09:35.657 verify_dump=1 00:09:35.657 verify_backlog=512 00:09:35.657 verify_state_save=0 00:09:35.657 do_verify=1 00:09:35.657 verify=crc32c-intel 00:09:35.657 [job0] 00:09:35.657 filename=/dev/nvme0n1 00:09:35.657 Could not set queue depth (nvme0n1) 00:09:35.922 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.922 fio-3.35 00:09:35.922 Starting 1 thread 00:09:36.858 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:37.117 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:37.375 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:37.633 10:58:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:37.892 10:58:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 77757 00:09:42.075 00:09:42.075 job0: (groupid=0, jobs=1): err= 0: pid=77778: Tue Oct 29 10:58:47 2024 00:09:42.075 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(242MiB/6006msec) 00:09:42.075 slat (usec): min=2, max=7985, avg=56.95, stdev=236.68 00:09:42.075 clat (usec): min=1459, max=19818, avg=8455.95, stdev=1536.25 00:09:42.075 lat (usec): min=1482, max=19836, avg=8512.90, stdev=1541.52 00:09:42.075 clat percentiles (usec): 00:09:42.075 | 1.00th=[ 4359], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7635], 00:09:42.075 | 30.00th=[ 7898], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8586], 00:09:42.075 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[11994], 00:09:42.075 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[14091], 00:09:42.075 | 99.99th=[19792] 00:09:42.075 bw ( KiB/s): min= 9704, max=27280, per=51.46%, avg=21254.55, stdev=4911.03, samples=11 00:09:42.075 iops : min= 2426, max= 6820, avg=5313.64, stdev=1227.76, samples=11 00:09:42.075 write: IOPS=5911, BW=23.1MiB/s (24.2MB/s)(126MiB/5469msec); 0 zone resets 00:09:42.075 slat (usec): min=4, max=2612, avg=66.62, stdev=165.87 00:09:42.075 clat (usec): min=1193, max=18825, avg=7347.07, stdev=1371.46 00:09:42.075 lat (usec): min=1214, max=19670, avg=7413.69, stdev=1376.82 00:09:42.075 clat percentiles (usec): 00:09:42.075 | 1.00th=[ 3261], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 6849], 00:09:42.075 | 30.00th=[ 7111], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7767], 00:09:42.075 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8717], 00:09:42.075 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13435], 99.95th=[14615], 00:09:42.075 | 99.99th=[17957] 00:09:42.075 bw ( KiB/s): min=10248, max=27008, per=89.99%, avg=21278.36, stdev=4616.20, samples=11 00:09:42.075 iops : min= 2562, max= 6752, avg=5319.55, stdev=1154.10, samples=11 00:09:42.075 lat (msec) : 2=0.04%, 4=1.77%, 10=91.47%, 20=6.72% 00:09:42.075 cpu : usr=5.41%, sys=20.60%, ctx=5401, majf=0, minf=66 00:09:42.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:42.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.075 issued rwts: total=62009,32330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.075 00:09:42.075 Run status group 0 (all jobs): 00:09:42.075 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=242MiB (254MB), run=6006-6006msec 00:09:42.075 WRITE: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=126MiB (132MB), run=5469-5469msec 00:09:42.075 00:09:42.075 Disk stats (read/write): 00:09:42.075 nvme0n1: ios=61131/31757, merge=0/0, ticks=495107/219102, in_queue=714209, util=98.73% 00:09:42.075 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:42.333 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=77855 00:09:42.590 10:58:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:42.590 [global] 00:09:42.590 thread=1 00:09:42.590 invalidate=1 00:09:42.590 rw=randrw 00:09:42.590 time_based=1 00:09:42.590 runtime=6 00:09:42.590 ioengine=libaio 00:09:42.590 direct=1 00:09:42.590 bs=4096 00:09:42.590 iodepth=128 00:09:42.590 norandommap=0 00:09:42.590 numjobs=1 00:09:42.590 00:09:42.590 verify_dump=1 00:09:42.591 verify_backlog=512 00:09:42.591 verify_state_save=0 00:09:42.591 do_verify=1 00:09:42.591 verify=crc32c-intel 00:09:42.591 [job0] 00:09:42.591 filename=/dev/nvme0n1 00:09:42.591 Could not set queue depth (nvme0n1) 00:09:42.591 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:42.591 fio-3.35 00:09:42.591 Starting 1 thread 00:09:43.526 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:43.785 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:44.043 
10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:44.043 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:44.307 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:44.566 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 77855 00:09:48.749 00:09:48.749 job0: (groupid=0, jobs=1): err= 0: pid=77882: Tue Oct 29 10:58:54 2024 00:09:48.749 read: IOPS=11.4k, BW=44.5MiB/s (46.6MB/s)(267MiB/6007msec) 00:09:48.749 slat (usec): min=4, max=5603, avg=43.65, stdev=191.44 00:09:48.749 clat (usec): min=1025, max=14745, avg=7684.61, stdev=1965.46 00:09:48.749 lat (usec): min=1034, max=14754, avg=7728.26, stdev=1980.72 00:09:48.749 clat percentiles (usec): 00:09:48.749 | 1.00th=[ 2966], 5.00th=[ 3752], 10.00th=[ 4752], 20.00th=[ 6128], 00:09:48.749 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:48.749 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10421], 00:09:48.749 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13960], 99.95th=[14222], 00:09:48.749 | 99.99th=[14484] 00:09:48.749 bw ( KiB/s): min=11016, max=37024, per=52.78%, avg=24036.00, stdev=8006.43, samples=12 00:09:48.749 iops : min= 2754, max= 9256, avg=6009.00, stdev=2001.61, samples=12 00:09:48.749 write: IOPS=6758, BW=26.4MiB/s (27.7MB/s)(141MiB/5343msec); 0 zone resets 00:09:48.749 slat (usec): min=11, max=1708, avg=54.73, stdev=139.13 00:09:48.749 clat (usec): min=1683, max=14229, avg=6560.51, stdev=1773.63 00:09:48.749 lat (usec): min=1711, max=14252, avg=6615.24, stdev=1788.95 00:09:48.749 clat percentiles (usec): 00:09:48.749 | 1.00th=[ 2671], 5.00th=[ 3392], 10.00th=[ 3884], 20.00th=[ 4621], 00:09:48.749 | 30.00th=[ 5473], 40.00th=[ 6783], 50.00th=[ 7177], 60.00th=[ 7504], 00:09:48.749 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:09:48.749 | 99.00th=[10552], 99.50th=[11469], 99.90th=[12649], 99.95th=[13173], 00:09:48.749 | 99.99th=[13435] 00:09:48.749 bw ( KiB/s): min=11552, max=37880, per=88.92%, avg=24040.00, stdev=7825.25, samples=12 00:09:48.749 iops : min= 2888, max= 9470, avg=6010.00, stdev=1956.31, samples=12 00:09:48.749 lat (msec) : 2=0.09%, 4=7.80%, 10=88.08%, 20=4.03% 00:09:48.749 cpu : usr=5.91%, sys=21.99%, ctx=6044, majf=0, minf=102 00:09:48.749 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:48.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.749 issued rwts: total=68384,36113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.749 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.749 
00:09:48.749 Run status group 0 (all jobs): 00:09:48.749 READ: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=267MiB (280MB), run=6007-6007msec 00:09:48.749 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=141MiB (148MB), run=5343-5343msec 00:09:48.749 00:09:48.749 Disk stats (read/write): 00:09:48.749 nvme0n1: ios=67512/35509, merge=0/0, ticks=497216/218138, in_queue=715354, util=98.62% 00:09:48.749 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:09:49.007 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:49.266 rmmod nvme_tcp 00:09:49.266 rmmod nvme_fabrics 00:09:49.266 rmmod nvme_keyring 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 77669 ']' 00:09:49.266 10:58:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 77669 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 77669 ']' 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 77669 00:09:49.266 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77669 00:09:49.267 killing process with pid 77669 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77669' 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 77669 00:09:49.267 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 77669 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:49.525 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:49.525 10:58:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:49.526 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:49.526 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:49.526 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:49.784 ************************************ 00:09:49.784 END TEST nvmf_target_multipath 00:09:49.784 ************************************ 00:09:49.784 00:09:49.784 real 0m19.033s 00:09:49.784 user 1m10.206s 00:09:49.784 sys 0m10.055s 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.784 ************************************ 00:09:49.784 START TEST nvmf_zcopy 00:09:49.784 ************************************ 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:49.784 * Looking for test storage... 
00:09:49.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:49.784 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:50.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.044 --rc genhtml_branch_coverage=1 00:09:50.044 --rc genhtml_function_coverage=1 00:09:50.044 --rc genhtml_legend=1 00:09:50.044 --rc geninfo_all_blocks=1 00:09:50.044 --rc geninfo_unexecuted_blocks=1 00:09:50.044 00:09:50.044 ' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:50.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.044 --rc genhtml_branch_coverage=1 00:09:50.044 --rc genhtml_function_coverage=1 00:09:50.044 --rc genhtml_legend=1 00:09:50.044 --rc geninfo_all_blocks=1 00:09:50.044 --rc geninfo_unexecuted_blocks=1 00:09:50.044 00:09:50.044 ' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:50.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.044 --rc genhtml_branch_coverage=1 00:09:50.044 --rc genhtml_function_coverage=1 00:09:50.044 --rc genhtml_legend=1 00:09:50.044 --rc geninfo_all_blocks=1 00:09:50.044 --rc geninfo_unexecuted_blocks=1 00:09:50.044 00:09:50.044 ' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:50.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.044 --rc genhtml_branch_coverage=1 00:09:50.044 --rc genhtml_function_coverage=1 00:09:50.044 --rc genhtml_legend=1 00:09:50.044 --rc geninfo_all_blocks=1 00:09:50.044 --rc geninfo_unexecuted_blocks=1 00:09:50.044 00:09:50.044 ' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
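Aside: the xtrace just above is the lcov version gate in scripts/common.sh. "lt 1.15 2" calls cmp_versions, which splits each version string on ".", "-" and ":" and walks the fields left to right, and the outcome decides which "--rc ...coverage=1" flag names land in LCOV_OPTS a few lines later. A minimal, simplified sketch of that field-by-field comparison is below; it is an illustration only, not the exact scripts/common.sh body (the real helper also routes each field through its decimal() check, handles ge/le/gt, and the function name here is made up).

  # Sketch only: assumes purely numeric dotted fields.
  lt_sketch() {
      local IFS='.-:'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer -> not "lt"
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older -> "lt"
      done
      return 1                                              # equal is not "lt"
  }
  lt_sketch 1.15 2 && echo "lcov < 2: keep the 1.x-style flag names in LCOV_OPTS"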
00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
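Aside: at this point nvmftestinit has been entered and the nvmftestfini trap installed; the veth/namespace plumbing follows below. The NVMF_APP array assembled above (-i "$NVMF_APP_SHM_ID" -e 0xFFFF) is how the target later ends up running inside the namespace: the netns wrapper is prepended at the end of the veth setup (common.sh@227, visible after the pings below), and nvmfappstart then backgrounds the binary and waits for the RPC socket. A condensed sketch of that flow, using the variable names visible in this trace but not the full common.sh function bodies:

  # Condensed sketch of how the traced helpers launch nvmf_tgt inside the netns.
  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF)

  # nvmf/common.sh@227 (after the veth setup below): wrap the app in the netns command.
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

  # nvmfappstart -m 0x2 (zcopy.sh@13): start the target, capture its pid,
  # then wait for /var/tmp/spdk.sock before any rpc.py call is issued.
  "${NVMF_APP[@]}" -m 0x2 &
  nvmfpid=$!
  # waitforlisten "$nvmfpid"   # polls the UNIX socket, as echoed in the log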
00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.044 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:50.045 Cannot find device "nvmf_init_br" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:50.045 10:58:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:50.045 Cannot find device "nvmf_init_br2" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:50.045 Cannot find device "nvmf_tgt_br" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.045 Cannot find device "nvmf_tgt_br2" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:50.045 Cannot find device "nvmf_init_br" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:50.045 Cannot find device "nvmf_init_br2" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:50.045 Cannot find device "nvmf_tgt_br" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:50.045 Cannot find device "nvmf_tgt_br2" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:50.045 Cannot find device "nvmf_br" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:50.045 Cannot find device "nvmf_init_if" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:50.045 Cannot find device "nvmf_init_if2" 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:50.045 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:50.304 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:50.305 10:58:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:50.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:50.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:09:50.305 00:09:50.305 --- 10.0.0.3 ping statistics --- 00:09:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.305 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:50.305 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:50.305 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:09:50.305 00:09:50.305 --- 10.0.0.4 ping statistics --- 00:09:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.305 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:50.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:50.305 00:09:50.305 --- 10.0.0.1 ping statistics --- 00:09:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.305 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:50.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:50.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:09:50.305 00:09:50.305 --- 10.0.0.2 ping statistics --- 00:09:50.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.305 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=78183 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 78183 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 78183 ']' 00:09:50.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:50.305 10:58:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.563 [2024-10-29 10:58:55.846535] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
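The cleanup/setup trace and the four pings above establish the test-bed topology: veth pairs whose *_if ends carry 10.0.0.1/2 on the host (initiator) side and 10.0.0.3/4 inside the nvmf_tgt_ns_spdk namespace (target side), with all *_br peer ends enslaved to the nvmf_br bridge and TCP port 4420 opened in iptables, after which nvmf_tgt is launched inside the namespace on core mask 0x2. A minimal sketch of the same topology, reduced to one initiator/target pair and assuming a root shell, follows; interface names mirror the trace.
    # one veth pair per side; the *_br ends meet on the nvmf_br bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                  # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # namespace -> host
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &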
00:09:50.563 [2024-10-29 10:58:55.846817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.563 [2024-10-29 10:58:55.990344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.563 [2024-10-29 10:58:56.009784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.563 [2024-10-29 10:58:56.009836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.563 [2024-10-29 10:58:56.009863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.563 [2024-10-29 10:58:56.009870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.563 [2024-10-29 10:58:56.009876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.563 [2024-10-29 10:58:56.010145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.563 [2024-10-29 10:58:56.039160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.822 [2024-10-29 10:58:56.131014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:50.822 [2024-10-29 10:58:56.147206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.822 malloc0 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.822 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.822 { 00:09:50.822 "params": { 00:09:50.822 "name": "Nvme$subsystem", 00:09:50.822 "trtype": "$TEST_TRANSPORT", 00:09:50.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.822 "adrfam": "ipv4", 00:09:50.822 "trsvcid": "$NVMF_PORT", 00:09:50.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.823 "hdgst": ${hdgst:-false}, 00:09:50.823 "ddgst": ${ddgst:-false} 00:09:50.823 }, 00:09:50.823 "method": "bdev_nvme_attach_controller" 00:09:50.823 } 00:09:50.823 EOF 00:09:50.823 )") 00:09:50.823 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:50.823 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
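Once the target is up and listening on /var/tmp/spdk.sock, the zcopy test configures it through rpc_cmd: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with a data listener and a discovery listener on 10.0.0.3:4420, and a 32 MiB / 4096-byte-block malloc bdev attached as namespace 1. rpc_cmd ultimately drives scripts/rpc.py, so a roughly equivalent manual sequence would look like the sketch below (the rpc variable and relative paths are illustrative).
    rpc=scripts/rpc.py   # talks to the target's default /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
The gen_nvmf_target_json/jq trace that resumes below is a separate step: it renders the initiator-side JSON that bdevperf will consume.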
00:09:50.823 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:50.823 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.823 "params": { 00:09:50.823 "name": "Nvme1", 00:09:50.823 "trtype": "tcp", 00:09:50.823 "traddr": "10.0.0.3", 00:09:50.823 "adrfam": "ipv4", 00:09:50.823 "trsvcid": "4420", 00:09:50.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.823 "hdgst": false, 00:09:50.823 "ddgst": false 00:09:50.823 }, 00:09:50.823 "method": "bdev_nvme_attach_controller" 00:09:50.823 }' 00:09:50.823 [2024-10-29 10:58:56.230527] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:09:50.823 [2024-10-29 10:58:56.230616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78209 ] 00:09:51.082 [2024-10-29 10:58:56.376305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.082 [2024-10-29 10:58:56.396232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.082 [2024-10-29 10:58:56.432697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.082 Running I/O for 10 seconds... 00:09:53.395 6138.00 IOPS, 47.95 MiB/s [2024-10-29T10:58:59.827Z] 6133.00 IOPS, 47.91 MiB/s [2024-10-29T10:59:00.760Z] 6219.33 IOPS, 48.59 MiB/s [2024-10-29T10:59:01.694Z] 6214.25 IOPS, 48.55 MiB/s [2024-10-29T10:59:02.630Z] 6210.40 IOPS, 48.52 MiB/s [2024-10-29T10:59:03.565Z] 6280.50 IOPS, 49.07 MiB/s [2024-10-29T10:59:04.940Z] 6328.86 IOPS, 49.44 MiB/s [2024-10-29T10:59:05.896Z] 6339.12 IOPS, 49.52 MiB/s [2024-10-29T10:59:06.829Z] 6315.11 IOPS, 49.34 MiB/s [2024-10-29T10:59:06.829Z] 6334.40 IOPS, 49.49 MiB/s 00:10:01.332 Latency(us) 00:10:01.332 [2024-10-29T10:59:06.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.332 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:01.332 Verification LBA range: start 0x0 length 0x1000 00:10:01.332 Nvme1n1 : 10.01 6337.87 49.51 0.00 0.00 20135.03 2740.60 31933.91 00:10:01.332 [2024-10-29T10:59:06.829Z] =================================================================================================================== 00:10:01.332 [2024-10-29T10:59:06.829Z] Total : 6337.87 49.51 0.00 0.00 20135.03 2740.60 31933.91 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=78326 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:01.332 { 00:10:01.332 "params": { 00:10:01.332 "name": "Nvme$subsystem", 00:10:01.332 "trtype": "$TEST_TRANSPORT", 00:10:01.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.332 "adrfam": "ipv4", 00:10:01.332 "trsvcid": "$NVMF_PORT", 00:10:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.332 "hdgst": ${hdgst:-false}, 00:10:01.332 "ddgst": ${ddgst:-false} 00:10:01.332 }, 00:10:01.332 "method": "bdev_nvme_attach_controller" 00:10:01.332 } 00:10:01.332 EOF 00:10:01.332 )") 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:01.332 [2024-10-29 10:59:06.667732] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.332 [2024-10-29 10:59:06.667773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:01.332 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:01.332 "params": { 00:10:01.332 "name": "Nvme1", 00:10:01.332 "trtype": "tcp", 00:10:01.332 "traddr": "10.0.0.3", 00:10:01.332 "adrfam": "ipv4", 00:10:01.332 "trsvcid": "4420", 00:10:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.332 "hdgst": false, 00:10:01.332 "ddgst": false 00:10:01.332 }, 00:10:01.332 "method": "bdev_nvme_attach_controller" 00:10:01.332 }' 00:10:01.332 [2024-10-29 10:59:06.679691] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.332 [2024-10-29 10:59:06.679871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.332 [2024-10-29 10:59:06.691708] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.332 [2024-10-29 10:59:06.691739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.332 [2024-10-29 10:59:06.703703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.332 [2024-10-29 10:59:06.703732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.332 [2024-10-29 10:59:06.707188] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
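bdevperf is pointed at the target through that generated JSON, the '{ "params": ... "method": "bdev_nvme_attach_controller" }' block printed in the trace, which is handed over on a file descriptor rather than a temp file; that is why the command lines show --json /dev/fd/62 for the first run and /dev/fd/63 for this one. A hedged sketch of the second invocation, assuming bash process substitution as the fd source:
    # -t 5: run 5 s; -q 128: queue depth; -w randrw -M 50: 50/50 mixed I/O; -o 8192: 8 KiB I/Os
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192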
00:10:01.332 [2024-10-29 10:59:06.707266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78326 ] 00:10:01.332 [2024-10-29 10:59:06.715702] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.332 [2024-10-29 10:59:06.715899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.332 [2024-10-29 10:59:06.727707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.332 [2024-10-29 10:59:06.727917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.735722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.735876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.747726] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.747877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.759743] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.759882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.771778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.771915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.783763] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.783942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.795752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.795786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.807752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.807782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.333 [2024-10-29 10:59:06.819753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.333 [2024-10-29 10:59:06.819981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.590 [2024-10-29 10:59:06.831769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.590 [2024-10-29 10:59:06.831805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.590 [2024-10-29 10:59:06.843766] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.590 [2024-10-29 10:59:06.843971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.590 [2024-10-29 10:59:06.849941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.590 [2024-10-29 10:59:06.855778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.590 [2024-10-29 10:59:06.855814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:01.590 [2024-10-29 10:59:06.867773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.590 [2024-10-29 10:59:06.867805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.590 [2024-10-29 10:59:06.869114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.590 [2024-10-29 10:59:06.879772] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.590 [2024-10-29 10:59:06.879800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.891802] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.892141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.903800] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.903838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.905293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.591 [2024-10-29 10:59:06.915802] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.915841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.927777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.927804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.939808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.940044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.951813] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.951845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.963822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.963854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.975838] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.975871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.987836] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.987866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:06.999845] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:06.999879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 Running I/O for 5 seconds... 
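Everything from here on is dominated by pairs of "Requested NSID 1 already in use" / "Unable to add namespace" messages arriving every few milliseconds. Read alongside zcopy.sh, these look intentional rather than a failure: while the 5-second randrw job (perfpid 78326) is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for the NSID that malloc0 already occupies; each attempt pauses and resumes the subsystem, so in-flight zero-copy requests are repeatedly forced across pause/resume boundaries. A sketch of that pattern (a hedged reading, not the script verbatim):
    # hammer the subsystem with add_ns calls for the duration of the bdevperf run;
    # every call fails on the NSID collision but still exercises pause/resume under zcopy I/O
    while kill -0 "$perfpid" 2>/dev/null; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done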
00:10:01.591 [2024-10-29 10:59:07.011850] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:07.012049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:07.027852] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:07.027889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:07.043136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:07.043335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:07.061334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:07.061369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.591 [2024-10-29 10:59:07.077882] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.591 [2024-10-29 10:59:07.078080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.093547] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.093583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.110912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.110947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.125109] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.125143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.141782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.141832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.157153] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.157186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.168339] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.168557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.184993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.185026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.201468] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.201494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.217566] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.217601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.234325] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 
[2024-10-29 10:59:07.234361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.250229] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.849 [2024-10-29 10:59:07.250263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.849 [2024-10-29 10:59:07.261343] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.850 [2024-10-29 10:59:07.261405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.850 [2024-10-29 10:59:07.277840] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.850 [2024-10-29 10:59:07.277873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.850 [2024-10-29 10:59:07.293288] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.850 [2024-10-29 10:59:07.293323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.850 [2024-10-29 10:59:07.302667] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.850 [2024-10-29 10:59:07.302702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.850 [2024-10-29 10:59:07.317498] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.850 [2024-10-29 10:59:07.317531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.850 [2024-10-29 10:59:07.332327] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.850 [2024-10-29 10:59:07.332542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.348858] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.348896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.363754] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.363911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.379576] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.379622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.397201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.397236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.411837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.411875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.427712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.427749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.445323] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.445357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.462201] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.462234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.477031] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.477068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.493014] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.493051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.509336] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.509399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.527129] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.527298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.541171] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.541206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.557290] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.557324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.574344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.574574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.109 [2024-10-29 10:59:07.591168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.109 [2024-10-29 10:59:07.591331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.608746] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.608930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.625005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.625169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.641755] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.641949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.657681] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.657862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.675912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.676139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.690532] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.690700] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.706991] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.707154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.723192] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.723356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.739806] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.740007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.757681] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.757861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.773641] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.773809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.792231] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.792410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.807530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.807698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.824033] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.824205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.840925] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.841094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.368 [2024-10-29 10:59:07.858559] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.368 [2024-10-29 10:59:07.858750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.874207] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.874418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.883987] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.884164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.898367] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.898584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.913866] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.914030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.932491] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.932653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.946136] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.946171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.961226] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.961259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.970120] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.970153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:07.985125] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:07.985158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:08.000225] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:08.000258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 12237.00 IOPS, 95.60 MiB/s [2024-10-29T10:59:08.124Z] [2024-10-29 10:59:08.009650] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:08.009681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:08.025451] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:08.025483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:08.034741] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.627 [2024-10-29 10:59:08.034924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.627 [2024-10-29 10:59:08.050572] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.628 [2024-10-29 10:59:08.050606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.628 [2024-10-29 10:59:08.065325] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.628 [2024-10-29 10:59:08.065359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.628 [2024-10-29 10:59:08.081141] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.628 [2024-10-29 10:59:08.081176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.628 [2024-10-29 10:59:08.097813] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.628 [2024-10-29 10:59:08.097979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.628 [2024-10-29 10:59:08.114674] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.628 [2024-10-29 10:59:08.114709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.130616] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:02.887 [2024-10-29 10:59:08.130653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.147459] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.147492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.162065] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.162100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.178058] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.178095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.196785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.196958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.212667] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.212702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.229239] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.229276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.247444] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.247509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.262536] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.262569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.273915] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.273947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.290318] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.290351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.305707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.305741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.317279] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.317313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.333163] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.333198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.349739] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.349787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.365155] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.365189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.887 [2024-10-29 10:59:08.374074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.887 [2024-10-29 10:59:08.374106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.390231] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.390268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.408346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.408567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.422943] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.422976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.440414] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.440586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.450836] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.450870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.465849] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.465885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.481675] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.481709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.490757] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.490804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.506543] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.506577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.521919] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.521953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.540227] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.540435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.554806] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.554840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.570434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.570463] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.587658] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.587833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.602808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.602975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.618158] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.618324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.146 [2024-10-29 10:59:08.634178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.146 [2024-10-29 10:59:08.634213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.651787] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.651828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.667796] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.667833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.685906] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.685969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.700176] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.700238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.717147] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.717473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.731685] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.731744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.746948] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.747006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.765622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.765663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.779752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.780095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.796414] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.796458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.812512] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.812545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.831512] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.831563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.846574] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.846612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.857303] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.857485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.873369] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.873543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.405 [2024-10-29 10:59:08.890013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.405 [2024-10-29 10:59:08.890064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.905603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.905659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.916059] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.916097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.930742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.930777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.947571] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.947644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.965689] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.965725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.980640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.980674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:08.997620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:08.997652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 12167.50 IOPS, 95.06 MiB/s [2024-10-29T10:59:09.161Z] [2024-10-29 10:59:09.013942] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.013974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.030516] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:03.664 [2024-10-29 10:59:09.030549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.046864] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.046895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.063374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.063434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.079059] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.079092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.097520] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.097552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.111173] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.111211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.126366] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.126450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.143800] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.143859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.664 [2024-10-29 10:59:09.158994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.664 [2024-10-29 10:59:09.159054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.176257] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.176310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.192349] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.192419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.209421] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.209485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.224246] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.224476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.239815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.240007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.249935] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.249969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.264713] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.264764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.274867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.275035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.289224] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.289259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.304770] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.304805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.322996] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.323031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.337678] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.337864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.347434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.347469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.363018] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.363052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.380528] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.380565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.396808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.396843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.923 [2024-10-29 10:59:09.415069] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.923 [2024-10-29 10:59:09.415103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.181 [2024-10-29 10:59:09.430171] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.430238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.439521] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.439556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.456070] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.456106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.473193] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.473229] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.491592] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.491638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.506887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.507069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.523600] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.523646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.540297] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.540333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.557047] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.557081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.573402] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.573467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.591074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.591108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.606717] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.606752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.616362] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.616427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.631249] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.631290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.649177] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.649354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.182 [2024-10-29 10:59:09.664087] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.182 [2024-10-29 10:59:09.664256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.681422] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.681606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.696902] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.697072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.706262] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.706460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.721087] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.721251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.736998] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.737246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.754986] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.755159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.770156] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.770329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.787228] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.787403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.803075] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.803226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.812280] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.812480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.827952] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.828130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.845051] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.845201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.861259] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.861432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.878174] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.878358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.896966] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.897126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.911523] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.911698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.443 [2024-10-29 10:59:09.926106] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.443 [2024-10-29 10:59:09.926276] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:09.943202] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:09.943358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:09.959150] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:09.959329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:09.975801] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:09.975994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:09.993893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:09.994064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 12069.00 IOPS, 94.29 MiB/s [2024-10-29T10:59:10.197Z] [2024-10-29 10:59:10.009298] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.009493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.025276] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.025455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.042630] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.042817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.059265] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.059465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.077177] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.077345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.091893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.092107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.108869] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.109037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.123216] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.123424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.140400] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.140588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.155867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.155908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 
10:59:10.171048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.171082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.180373] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.180447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.700 [2024-10-29 10:59:10.196582] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.700 [2024-10-29 10:59:10.196632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.213611] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.213648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.230364] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.230443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.246146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.246196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.255837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.256021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.271603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.271666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.281217] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.281251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.296399] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.296465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.306366] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.306446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.321963] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.321999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.337983] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.338016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.355078] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.355274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.371361] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.371427] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.389527] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.389561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.404360] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.404439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.420189] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.420225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.438677] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.438712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.958 [2024-10-29 10:59:10.452753] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.958 [2024-10-29 10:59:10.452956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.468594] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.468628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.478544] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.478576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.491180] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.491214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.507310] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.507345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.524942] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.524977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.540192] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.540365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.215 [2024-10-29 10:59:10.557223] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.215 [2024-10-29 10:59:10.557258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.572581] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.572619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.590969] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.591005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.604889] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.604925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.620710] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.620761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.639561] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.639594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.654627] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.654829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.672610] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.672646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.688349] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.688419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.216 [2024-10-29 10:59:10.707059] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.216 [2024-10-29 10:59:10.707096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.721622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.721833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.737894] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.737928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.755926] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.755992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.771481] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.771515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.780691] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.780875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.796194] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.796363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.806530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.806564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.820573] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.820630] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.836354] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.836593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.852948] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.852982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.871045] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.871079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.886331] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.886403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.903663] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.903831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.919119] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.919290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.929615] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.929654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.943754] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.943794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.953912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.953947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-10-29 10:59:10.969851] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-10-29 10:59:10.969889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:10.984471] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:10.984515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.000415] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.000452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 11946.00 IOPS, 93.33 MiB/s [2024-10-29T10:59:11.228Z] [2024-10-29 10:59:11.016108] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.016280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.026322] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.026514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 
10:59:11.041467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.041649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.057364] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.057574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.067464] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.067661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.081950] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.082114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.092079] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.092242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.106916] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.107081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.118953] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.119119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.136770] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.136954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.151290] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.151487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.167306] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.167548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.183978] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.184196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.200563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.200739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.731 [2024-10-29 10:59:11.218169] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.731 [2024-10-29 10:59:11.218326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.989 [2024-10-29 10:59:11.234346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.989 [2024-10-29 10:59:11.234589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.989 [2024-10-29 10:59:11.250662] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.250820] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.266977] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.267128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.284585] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.284734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.301164] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.301198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.317313] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.317348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.334601] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.334635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.350954] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.350988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.367397] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.367459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.385237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.385272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.401453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.401487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.419375] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.419435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.435543] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.435577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.453245] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.453428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.469193] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.469347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.990 [2024-10-29 10:59:11.480641] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.990 [2024-10-29 10:59:11.480676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.496176] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.496334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.513261] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.513298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.529887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.530061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.545200] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.545353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.561737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.561771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.577897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.577931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.596662] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.596696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.610746] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.610794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.626522] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.626558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.642339] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.642417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.660228] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.660405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.675683] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.675721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.693155] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.693192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.708226] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.708263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.725558] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.725593] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.248 [2024-10-29 10:59:11.740090] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.248 [2024-10-29 10:59:11.740125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.756074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.756244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.765622] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.765807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.780307] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.780342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.795387] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.795590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.805305] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.805339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.820728] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.820795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.836916] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.836975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.853547] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.853634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.869942] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.870005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.888310] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.888625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.902687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.902741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.919348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.919433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.936097] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.936400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.952272] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.952322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.962284] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.962330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.978237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.978297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.507 [2024-10-29 10:59:11.995161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.507 [2024-10-29 10:59:11.995500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 11887.00 IOPS, 92.87 MiB/s [2024-10-29T10:59:12.263Z] [2024-10-29 10:59:12.010981] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.011186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.021996] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 00:10:06.766 Latency(us) 00:10:06.766 [2024-10-29T10:59:12.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:06.766 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:06.766 Nvme1n1 : 5.01 11884.80 92.85 0.00 0.00 10757.57 4349.21 18588.39 00:10:06.766 [2024-10-29T10:59:12.263Z] =================================================================================================================== 00:10:06.766 [2024-10-29T10:59:12.263Z] Total : 11884.80 92.85 0.00 0.00 10757.57 4349.21 18588.39 00:10:06.766 [2024-10-29 10:59:12.022297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.033975] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.034008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.046010] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.046055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.058060] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.058127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.070043] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.070357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.082057] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.082310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.094029] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.094269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 
10:59:12.106013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.106195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.118042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.118328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.130009] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.130178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 [2024-10-29 10:59:12.142012] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.766 [2024-10-29 10:59:12.142160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.766 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (78326) - No such process 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 78326 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.766 delay0 00:10:06.766 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.767 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:06.767 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.767 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.767 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.767 10:59:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:07.025 [2024-10-29 10:59:12.355555] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:13.645 Initializing NVMe Controllers 00:10:13.645 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.645 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:13.645 Initialization complete. Launching workers. 
00:10:13.645 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94 00:10:13.645 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 381, failed to submit 33 00:10:13.645 success 267, unsuccessful 114, failed 0 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.645 rmmod nvme_tcp 00:10:13.645 rmmod nvme_fabrics 00:10:13.645 rmmod nvme_keyring 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 78183 ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 78183 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 78183 ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 78183 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78183 00:10:13.645 killing process with pid 78183 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78183' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 78183 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 78183 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.645 10:59:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:13.645 00:10:13.645 real 0m23.755s 00:10:13.645 user 0m38.817s 00:10:13.645 sys 0m6.634s 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.645 ************************************ 00:10:13.645 END TEST nvmf_zcopy 00:10:13.645 ************************************ 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.645 10:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.645 ************************************ 00:10:13.645 START TEST nvmf_nmic 00:10:13.645 ************************************ 00:10:13.645 10:59:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:13.645 * Looking for test storage... 00:10:13.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:13.645 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:13.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.905 --rc genhtml_branch_coverage=1 00:10:13.905 --rc genhtml_function_coverage=1 00:10:13.905 --rc genhtml_legend=1 00:10:13.905 --rc geninfo_all_blocks=1 00:10:13.905 --rc geninfo_unexecuted_blocks=1 00:10:13.905 00:10:13.905 ' 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:13.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.905 --rc genhtml_branch_coverage=1 00:10:13.905 --rc genhtml_function_coverage=1 00:10:13.905 --rc genhtml_legend=1 00:10:13.905 --rc geninfo_all_blocks=1 00:10:13.905 --rc geninfo_unexecuted_blocks=1 00:10:13.905 00:10:13.905 ' 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:13.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.905 --rc genhtml_branch_coverage=1 00:10:13.905 --rc genhtml_function_coverage=1 00:10:13.905 --rc genhtml_legend=1 00:10:13.905 --rc geninfo_all_blocks=1 00:10:13.905 --rc geninfo_unexecuted_blocks=1 00:10:13.905 00:10:13.905 ' 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:13.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.905 --rc genhtml_branch_coverage=1 00:10:13.905 --rc genhtml_function_coverage=1 00:10:13.905 --rc genhtml_legend=1 00:10:13.905 --rc geninfo_all_blocks=1 00:10:13.905 --rc geninfo_unexecuted_blocks=1 00:10:13.905 00:10:13.905 ' 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.905 10:59:19 
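Note: the lcov probe above goes through the small version comparator in scripts/common.sh (cmp_versions splitting on '.', '-' and ':'); a simplified sketch of the same element-wise idea, not the original implementation, is:

# version_lt: succeed if version $1 is strictly older than version $2
# (numeric components only; simplified sketch of the cmp_versions trace).
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2"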
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.905 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.906 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:13.906 10:59:19 
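Note: nmic.sh is starting up here by sourcing test/nvmf/common.sh and calling nvmftestinit; the overall shape of such a target test, as it can be read out of this trace, is roughly the following sketch (not the actual nmic.sh, just the pattern).

#!/usr/bin/env bash
# Rough skeleton of an SPDK nvmf target test as exercised in this log (sketch).
testdir=$(readlink -f "$(dirname "$0")")
rootdir=$(readlink -f "$testdir/../../..")
source "$rootdir/test/nvmf/common.sh"   # defines nvmftestinit, nvmfappstart, rpc_cmd, ...

nvmftestinit                  # builds the veth/namespace topology for --transport=tcp
nvmfappstart -m 0xF           # starts nvmf_tgt inside the target namespace

# ... rpc_cmd provisioning, nvme connect and the fio workload go here ...

trap - SIGINT SIGTERM EXIT
nvmftestfini                  # tears the topology down again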
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:13.906 Cannot 
find device "nvmf_init_br" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:13.906 Cannot find device "nvmf_init_br2" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:13.906 Cannot find device "nvmf_tgt_br" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.906 Cannot find device "nvmf_tgt_br2" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:13.906 Cannot find device "nvmf_init_br" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:13.906 Cannot find device "nvmf_init_br2" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:13.906 Cannot find device "nvmf_tgt_br" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:13.906 Cannot find device "nvmf_tgt_br2" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:13.906 Cannot find device "nvmf_br" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:13.906 Cannot find device "nvmf_init_if" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:13.906 Cannot find device "nvmf_init_if2" 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
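Note: the commands that follow build a small veth-and-bridge topology with the target side isolated in a network namespace; condensed into one place (first initiator/target pair only, the harness creates a second pair the same way), the setup is roughly:

# Sketch of the nvmf_veth_init topology, using the names and addresses from this trace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # isolate the target end
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the two host-side peers so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP traffic in and tag the rules so teardown can find them later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3   # sanity check: initiator can reach the target address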
00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.906 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:14.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:10:14.166 00:10:14.166 --- 10.0.0.3 ping statistics --- 00:10:14.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.166 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:14.166 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:14.166 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:14.166 00:10:14.166 --- 10.0.0.4 ping statistics --- 00:10:14.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.166 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:14.166 00:10:14.166 --- 10.0.0.1 ping statistics --- 00:10:14.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.166 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:14.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:14.166 00:10:14.166 --- 10.0.0.2 ping statistics --- 00:10:14.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.166 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=78700 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 78700 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 78700 ']' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.166 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.424 [2024-10-29 10:59:19.676562] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:10:14.424 [2024-10-29 10:59:19.676654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.424 [2024-10-29 10:59:19.832513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.424 [2024-10-29 10:59:19.857622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.424 [2024-10-29 10:59:19.857674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.424 [2024-10-29 10:59:19.857687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.424 [2024-10-29 10:59:19.857697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.424 [2024-10-29 10:59:19.857706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.424 [2024-10-29 10:59:19.858554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.424 [2024-10-29 10:59:19.859246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.424 [2024-10-29 10:59:19.859481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.424 [2024-10-29 10:59:19.859893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.424 [2024-10-29 10:59:19.913093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.683 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.683 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:14.683 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.683 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.683 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 [2024-10-29 10:59:20.018429] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 Malloc0 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.683 10:59:20 
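Note: the rpc_cmd calls around this point, and the two test cases that follow, can be reproduced directly with scripts/rpc.py against the target's default /var/tmp/spdk.sock; a sketch using the NQNs, serials and addresses from this trace (rpc_cmd effectively forwards to this script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Provision the TCP transport, a 64 MB / 512 B-block Malloc bdev, and subsystem 1.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Test case 1: the same bdev cannot be claimed by a second subsystem.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "Adding namespace failed - expected result."
fi

# Test case 2: a second listener gives the host two paths to cnode1.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6
hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421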
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 [2024-10-29 10:59:20.088579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:14.683 test case1: single bdev can't be used in multiple subsystems 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 [2024-10-29 10:59:20.124402] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:14.683 [2024-10-29 10:59:20.124618] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:14.683 [2024-10-29 10:59:20.124820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.683 request: 00:10:14.683 { 00:10:14.683 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:14.683 "namespace": { 00:10:14.683 "bdev_name": "Malloc0", 00:10:14.683 "no_auto_visible": false 00:10:14.683 }, 00:10:14.683 "method": "nvmf_subsystem_add_ns", 00:10:14.683 "req_id": 1 00:10:14.683 } 00:10:14.683 Got JSON-RPC error response 00:10:14.683 response: 00:10:14.683 { 00:10:14.683 "code": -32602, 00:10:14.683 "message": "Invalid parameters" 00:10:14.683 } 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:14.683 Adding namespace failed - expected result. 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:14.683 test case2: host connect to nvmf target in multiple paths 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.683 [2024-10-29 10:59:20.140558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.683 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:14.942 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:14.942 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.942 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:14.942 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.942 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:14.942 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:17.474 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:17.474 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:17.474 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.474 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:17.474 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.474 10:59:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:17.474 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:17.474 [global] 00:10:17.474 thread=1 00:10:17.474 invalidate=1 00:10:17.474 rw=write 00:10:17.474 time_based=1 00:10:17.474 runtime=1 00:10:17.474 ioengine=libaio 00:10:17.474 direct=1 00:10:17.474 bs=4096 00:10:17.474 iodepth=1 00:10:17.474 norandommap=0 00:10:17.474 numjobs=1 00:10:17.474 00:10:17.474 verify_dump=1 00:10:17.474 verify_backlog=512 00:10:17.474 verify_state_save=0 00:10:17.474 do_verify=1 00:10:17.474 verify=crc32c-intel 00:10:17.474 [job0] 00:10:17.474 filename=/dev/nvme0n1 00:10:17.474 Could not set queue depth (nvme0n1) 00:10:17.474 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.474 fio-3.35 00:10:17.474 Starting 1 thread 00:10:18.426 00:10:18.426 job0: (groupid=0, jobs=1): err= 0: pid=78784: Tue Oct 29 10:59:23 2024 00:10:18.426 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:18.426 slat (nsec): min=10801, max=58665, avg=13539.18, stdev=4119.86 00:10:18.426 clat (usec): min=129, max=538, avg=174.19, stdev=23.72 00:10:18.426 lat (usec): min=140, max=567, avg=187.73, stdev=24.36 00:10:18.426 clat percentiles (usec): 00:10:18.426 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:18.426 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:18.426 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 210], 00:10:18.426 | 99.00th=[ 233], 99.50th=[ 285], 99.90th=[ 400], 99.95th=[ 519], 00:10:18.426 | 99.99th=[ 537] 00:10:18.426 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:10:18.426 slat (usec): min=14, max=112, avg=20.75, stdev= 6.29 00:10:18.426 clat (usec): min=77, max=403, avg=107.14, stdev=16.27 00:10:18.426 lat (usec): min=94, max=420, avg=127.89, stdev=18.21 00:10:18.426 clat percentiles (usec): 00:10:18.426 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 95], 00:10:18.426 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 104], 60.00th=[ 108], 00:10:18.426 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 137], 00:10:18.426 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 215], 00:10:18.426 | 99.99th=[ 404] 00:10:18.426 bw ( KiB/s): min=12664, max=12664, per=97.93%, avg=12664.00, stdev= 0.00, samples=1 00:10:18.426 iops : min= 3166, max= 3166, avg=3166.00, stdev= 0.00, samples=1 00:10:18.426 lat (usec) : 100=19.18%, 250=80.42%, 500=0.36%, 750=0.03% 00:10:18.426 cpu : usr=1.80%, sys=8.90%, ctx=6308, majf=0, minf=5 00:10:18.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.426 issued rwts: total=3072,3236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.426 00:10:18.426 Run status group 0 (all jobs): 00:10:18.426 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:18.426 WRITE: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=12.6MiB (13.3MB), run=1001-1001msec 00:10:18.426 00:10:18.426 Disk stats (read/write): 00:10:18.426 nvme0n1: ios=2674/3072, merge=0/0, ticks=489/355, 
in_queue=844, util=91.27% 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.426 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.426 rmmod nvme_tcp 00:10:18.427 rmmod nvme_fabrics 00:10:18.427 rmmod nvme_keyring 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 78700 ']' 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 78700 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 78700 ']' 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 78700 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78700 00:10:18.696 killing process with pid 78700 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78700' 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # 
kill 78700 00:10:18.696 10:59:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 78700 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:18.696 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:18.955 00:10:18.955 real 0m5.379s 00:10:18.955 user 0m15.727s 00:10:18.955 sys 0m2.321s 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:18.955 ************************************ 
00:10:18.955 END TEST nvmf_nmic 00:10:18.955 ************************************ 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.955 ************************************ 00:10:18.955 START TEST nvmf_fio_target 00:10:18.955 ************************************ 00:10:18.955 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:19.215 * Looking for test storage... 00:10:19.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.215 --rc genhtml_branch_coverage=1 00:10:19.215 --rc genhtml_function_coverage=1 00:10:19.215 --rc genhtml_legend=1 00:10:19.215 --rc geninfo_all_blocks=1 00:10:19.215 --rc geninfo_unexecuted_blocks=1 00:10:19.215 00:10:19.215 ' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.215 --rc genhtml_branch_coverage=1 00:10:19.215 --rc genhtml_function_coverage=1 00:10:19.215 --rc genhtml_legend=1 00:10:19.215 --rc geninfo_all_blocks=1 00:10:19.215 --rc geninfo_unexecuted_blocks=1 00:10:19.215 00:10:19.215 ' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.215 --rc genhtml_branch_coverage=1 00:10:19.215 --rc genhtml_function_coverage=1 00:10:19.215 --rc genhtml_legend=1 00:10:19.215 --rc geninfo_all_blocks=1 00:10:19.215 --rc geninfo_unexecuted_blocks=1 00:10:19.215 00:10:19.215 ' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.215 --rc genhtml_branch_coverage=1 00:10:19.215 --rc genhtml_function_coverage=1 00:10:19.215 --rc genhtml_legend=1 00:10:19.215 --rc geninfo_all_blocks=1 00:10:19.215 --rc geninfo_unexecuted_blocks=1 00:10:19.215 00:10:19.215 ' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:19.215 
10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.215 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.216 10:59:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:19.216 Cannot find device "nvmf_init_br" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:19.216 Cannot find device "nvmf_init_br2" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:19.216 Cannot find device "nvmf_tgt_br" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.216 Cannot find device "nvmf_tgt_br2" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:19.216 Cannot find device "nvmf_init_br" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:19.216 Cannot find device "nvmf_init_br2" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:19.216 Cannot find device "nvmf_tgt_br" 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:19.216 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:19.216 Cannot find device "nvmf_tgt_br2" 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:19.476 Cannot find device "nvmf_br" 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:19.476 Cannot find device "nvmf_init_if" 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:19.476 Cannot find device "nvmf_init_if2" 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:19.476 
10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.476 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:19.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:10:19.736 00:10:19.736 --- 10.0.0.3 ping statistics --- 00:10:19.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.736 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:19.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:19.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:10:19.736 00:10:19.736 --- 10.0.0.4 ping statistics --- 00:10:19.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.736 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:19.736 00:10:19.736 --- 10.0.0.1 ping statistics --- 00:10:19.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.736 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:19.736 10:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:19.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:19.736 00:10:19.736 --- 10.0.0.2 ping statistics --- 00:10:19.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.736 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:19.736 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=79018 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 79018 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 79018 ']' 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:19.737 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.737 [2024-10-29 10:59:25.096807] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
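The nvmf_veth_init / nvmfappstart records traced above reduce to a small veth-and-bridge topology: initiator interfaces stay in the default namespace, target interfaces move into nvmf_tgt_ns_spdk, everything is joined over the nvmf_br bridge with 10.0.0.1-10.0.0.4/24 addresses, iptables admits NVMe/TCP traffic on port 4420, and reachability is confirmed with single pings before nvmf_tgt is launched inside the namespace. The following is a minimal stand-alone sketch of that topology for the first initiator/target pair only, using the same interface, namespace, and address names that appear in the trace; it is an illustrative reduction of the test scripts, not their exact code path.

  #!/usr/bin/env bash
  # Sketch of the virtual network built by nvmf_veth_init (names and addresses taken from the log above).
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one leg for the host/namespace endpoint, the peer leg for the bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Addressing: initiator on 10.0.0.1, target on 10.0.0.3 (same /24).
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # Bridge the peer legs together so initiator and target namespaces can talk.
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on the standard discovery/IO port and bridge-local forwarding.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Reachability check mirroring the trace: initiator namespace pings the target address.
  ping -c 1 10.0.0.3

With the topology up, the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as in the trace), which is why the subsequent listener is added on 10.0.0.3:4420 and the host connects to that address.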
00:10:19.737 [2024-10-29 10:59:25.096900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.997 [2024-10-29 10:59:25.252746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.997 [2024-10-29 10:59:25.276848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.997 [2024-10-29 10:59:25.276908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.997 [2024-10-29 10:59:25.276922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.997 [2024-10-29 10:59:25.276932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.997 [2024-10-29 10:59:25.276940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.997 [2024-10-29 10:59:25.277819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.997 [2024-10-29 10:59:25.278517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.997 [2024-10-29 10:59:25.278728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.997 [2024-10-29 10:59:25.278818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.997 [2024-10-29 10:59:25.311862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.997 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:20.255 [2024-10-29 10:59:25.687356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.255 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.514 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:20.514 10:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.082 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:21.082 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.082 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:21.082 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.648 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:21.648 10:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:21.648 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.905 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:21.905 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.471 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:22.471 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.730 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:22.730 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:22.730 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.988 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.988 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.246 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:23.246 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:23.504 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:23.762 [2024-10-29 10:59:29.212291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:23.762 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:24.020 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:24.278 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:24.536 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:24.536 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:24.536 10:59:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.536 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:24.536 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:24.536 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:26.439 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:26.439 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:26.439 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.439 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:26.439 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.439 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:26.440 10:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.440 [global] 00:10:26.440 thread=1 00:10:26.440 invalidate=1 00:10:26.440 rw=write 00:10:26.440 time_based=1 00:10:26.440 runtime=1 00:10:26.440 ioengine=libaio 00:10:26.440 direct=1 00:10:26.440 bs=4096 00:10:26.440 iodepth=1 00:10:26.440 norandommap=0 00:10:26.440 numjobs=1 00:10:26.440 00:10:26.440 verify_dump=1 00:10:26.440 verify_backlog=512 00:10:26.440 verify_state_save=0 00:10:26.440 do_verify=1 00:10:26.440 verify=crc32c-intel 00:10:26.440 [job0] 00:10:26.440 filename=/dev/nvme0n1 00:10:26.440 [job1] 00:10:26.440 filename=/dev/nvme0n2 00:10:26.440 [job2] 00:10:26.440 filename=/dev/nvme0n3 00:10:26.440 [job3] 00:10:26.440 filename=/dev/nvme0n4 00:10:26.699 Could not set queue depth (nvme0n1) 00:10:26.699 Could not set queue depth (nvme0n2) 00:10:26.699 Could not set queue depth (nvme0n3) 00:10:26.699 Could not set queue depth (nvme0n4) 00:10:26.699 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.699 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.699 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.699 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.699 fio-3.35 00:10:26.699 Starting 4 threads 00:10:28.075 00:10:28.075 job0: (groupid=0, jobs=1): err= 0: pid=79200: Tue Oct 29 10:59:33 2024 00:10:28.075 read: IOPS=1635, BW=6541KiB/s (6698kB/s)(6548KiB/1001msec) 00:10:28.075 slat (nsec): min=10975, max=45789, avg=14408.74, stdev=2889.16 00:10:28.075 clat (usec): min=147, max=686, avg=302.12, stdev=46.82 00:10:28.075 lat (usec): min=160, max=697, avg=316.53, stdev=46.92 00:10:28.075 clat percentiles (usec): 00:10:28.075 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:10:28.075 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:10:28.075 | 70.00th=[ 306], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 383], 00:10:28.075 | 99.00th=[ 453], 99.50th=[ 494], 99.90th=[ 619], 99.95th=[ 685], 00:10:28.075 | 99.99th=[ 685] 
00:10:28.075 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:28.075 slat (usec): min=16, max=687, avg=25.33, stdev=18.33 00:10:28.075 clat (usec): min=17, max=1775, avg=206.90, stdev=83.79 00:10:28.075 lat (usec): min=125, max=1798, avg=232.23, stdev=88.62 00:10:28.075 clat percentiles (usec): 00:10:28.075 | 1.00th=[ 111], 5.00th=[ 119], 10.00th=[ 125], 20.00th=[ 135], 00:10:28.075 | 30.00th=[ 176], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:10:28.075 | 70.00th=[ 212], 80.00th=[ 233], 90.00th=[ 330], 95.00th=[ 375], 00:10:28.075 | 99.00th=[ 412], 99.50th=[ 457], 99.90th=[ 611], 99.95th=[ 627], 00:10:28.075 | 99.99th=[ 1778] 00:10:28.075 bw ( KiB/s): min= 8192, max= 8192, per=19.95%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.075 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.075 lat (usec) : 20=0.03%, 250=46.57%, 500=53.03%, 750=0.35% 00:10:28.075 lat (msec) : 2=0.03% 00:10:28.075 cpu : usr=1.10%, sys=6.20%, ctx=3699, majf=0, minf=13 00:10:28.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.075 issued rwts: total=1637,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.075 job1: (groupid=0, jobs=1): err= 0: pid=79201: Tue Oct 29 10:59:33 2024 00:10:28.075 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:28.075 slat (nsec): min=10907, max=30514, avg=13086.83, stdev=2057.42 00:10:28.075 clat (usec): min=130, max=209, avg=162.39, stdev=12.58 00:10:28.075 lat (usec): min=142, max=224, avg=175.48, stdev=13.04 00:10:28.075 clat percentiles (usec): 00:10:28.075 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:10:28.075 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:10:28.075 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:10:28.075 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 208], 99.95th=[ 208], 00:10:28.075 | 99.99th=[ 210] 00:10:28.075 write: IOPS=3102, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:10:28.075 slat (usec): min=14, max=140, avg=20.73, stdev= 5.02 00:10:28.075 clat (usec): min=93, max=1505, avg=124.71, stdev=27.96 00:10:28.075 lat (usec): min=111, max=1523, avg=145.45, stdev=28.73 00:10:28.075 clat percentiles (usec): 00:10:28.075 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 114], 00:10:28.075 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:10:28.075 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:10:28.075 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 262], 00:10:28.075 | 99.99th=[ 1500] 00:10:28.075 bw ( KiB/s): min=12288, max=12288, per=29.93%, avg=12288.00, stdev= 0.00, samples=1 00:10:28.075 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:28.075 lat (usec) : 100=0.62%, 250=99.35%, 500=0.02% 00:10:28.075 lat (msec) : 2=0.02% 00:10:28.075 cpu : usr=2.30%, sys=8.30%, ctx=6178, majf=0, minf=12 00:10:28.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.075 issued rwts: total=3072,3106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.075 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:10:28.075 job2: (groupid=0, jobs=1): err= 0: pid=79203: Tue Oct 29 10:59:33 2024 00:10:28.075 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec) 00:10:28.075 slat (nsec): min=11009, max=50614, avg=13417.99, stdev=3696.55 00:10:28.075 clat (usec): min=148, max=251, avg=176.46, stdev=13.02 00:10:28.075 lat (usec): min=160, max=272, avg=189.88, stdev=14.27 00:10:28.075 clat percentiles (usec): 00:10:28.075 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:28.075 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:10:28.075 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:10:28.075 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 239], 99.95th=[ 245], 00:10:28.075 | 99.99th=[ 251] 00:10:28.075 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:28.075 slat (usec): min=14, max=156, avg=20.51, stdev= 5.10 00:10:28.075 clat (usec): min=103, max=2056, avg=134.72, stdev=39.07 00:10:28.075 lat (usec): min=120, max=2074, avg=155.23, stdev=39.69 00:10:28.075 clat percentiles (usec): 00:10:28.075 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:10:28.075 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:10:28.075 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:10:28.075 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 293], 99.95th=[ 660], 00:10:28.075 | 99.99th=[ 2057] 00:10:28.075 bw ( KiB/s): min=12288, max=12288, per=29.93%, avg=12288.00, stdev= 0.00, samples=1 00:10:28.075 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:28.075 lat (usec) : 250=99.90%, 500=0.07%, 750=0.02% 00:10:28.075 lat (msec) : 4=0.02% 00:10:28.075 cpu : usr=1.70%, sys=8.10%, ctx=5783, majf=0, minf=11 00:10:28.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.076 issued rwts: total=2710,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.076 job3: (groupid=0, jobs=1): err= 0: pid=79204: Tue Oct 29 10:59:33 2024 00:10:28.076 read: IOPS=1816, BW=7265KiB/s (7439kB/s)(7272KiB/1001msec) 00:10:28.076 slat (nsec): min=10628, max=39166, avg=13669.69, stdev=2589.60 00:10:28.076 clat (usec): min=149, max=810, avg=309.38, stdev=59.55 00:10:28.076 lat (usec): min=160, max=824, avg=323.05, stdev=59.93 00:10:28.076 clat percentiles (usec): 00:10:28.076 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:10:28.076 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:10:28.076 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 383], 00:10:28.076 | 99.00th=[ 519], 99.50th=[ 668], 99.90th=[ 783], 99.95th=[ 807], 00:10:28.076 | 99.99th=[ 807] 00:10:28.076 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:28.076 slat (usec): min=16, max=161, avg=20.11, stdev= 5.38 00:10:28.076 clat (usec): min=101, max=501, avg=178.52, stdev=41.64 00:10:28.076 lat (usec): min=123, max=631, avg=198.63, stdev=42.46 00:10:28.076 clat percentiles (usec): 00:10:28.076 | 1.00th=[ 110], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 131], 00:10:28.076 | 30.00th=[ 141], 40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 200], 00:10:28.076 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:10:28.076 | 99.00th=[ 253], 99.50th=[ 
258], 99.90th=[ 453], 99.95th=[ 469], 00:10:28.076 | 99.99th=[ 502] 00:10:28.076 bw ( KiB/s): min= 8192, max= 8192, per=19.95%, avg=8192.00, stdev= 0.00, samples=1 00:10:28.076 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:28.076 lat (usec) : 250=53.60%, 500=45.78%, 750=0.52%, 1000=0.10% 00:10:28.076 cpu : usr=1.40%, sys=5.20%, ctx=3866, majf=0, minf=7 00:10:28.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.076 issued rwts: total=1818,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.076 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.076 00:10:28.076 Run status group 0 (all jobs): 00:10:28.076 READ: bw=36.0MiB/s (37.8MB/s), 6541KiB/s-12.0MiB/s (6698kB/s-12.6MB/s), io=36.1MiB (37.8MB), run=1001-1001msec 00:10:28.076 WRITE: bw=40.1MiB/s (42.0MB/s), 8184KiB/s-12.1MiB/s (8380kB/s-12.7MB/s), io=40.1MiB (42.1MB), run=1001-1001msec 00:10:28.076 00:10:28.076 Disk stats (read/write): 00:10:28.076 nvme0n1: ios=1586/1542, merge=0/0, ticks=500/342, in_queue=842, util=87.68% 00:10:28.076 nvme0n2: ios=2601/2769, merge=0/0, ticks=449/363, in_queue=812, util=88.16% 00:10:28.076 nvme0n3: ios=2397/2560, merge=0/0, ticks=441/370, in_queue=811, util=89.15% 00:10:28.076 nvme0n4: ios=1536/1884, merge=0/0, ticks=473/342, in_queue=815, util=89.80% 00:10:28.076 10:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:28.076 [global] 00:10:28.076 thread=1 00:10:28.076 invalidate=1 00:10:28.076 rw=randwrite 00:10:28.076 time_based=1 00:10:28.076 runtime=1 00:10:28.076 ioengine=libaio 00:10:28.076 direct=1 00:10:28.076 bs=4096 00:10:28.076 iodepth=1 00:10:28.076 norandommap=0 00:10:28.076 numjobs=1 00:10:28.076 00:10:28.076 verify_dump=1 00:10:28.076 verify_backlog=512 00:10:28.076 verify_state_save=0 00:10:28.076 do_verify=1 00:10:28.076 verify=crc32c-intel 00:10:28.076 [job0] 00:10:28.076 filename=/dev/nvme0n1 00:10:28.076 [job1] 00:10:28.076 filename=/dev/nvme0n2 00:10:28.076 [job2] 00:10:28.076 filename=/dev/nvme0n3 00:10:28.076 [job3] 00:10:28.076 filename=/dev/nvme0n4 00:10:28.076 Could not set queue depth (nvme0n1) 00:10:28.076 Could not set queue depth (nvme0n2) 00:10:28.076 Could not set queue depth (nvme0n3) 00:10:28.076 Could not set queue depth (nvme0n4) 00:10:28.076 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.076 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.076 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.076 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.076 fio-3.35 00:10:28.076 Starting 4 threads 00:10:29.455 00:10:29.455 job0: (groupid=0, jobs=1): err= 0: pid=79257: Tue Oct 29 10:59:34 2024 00:10:29.455 read: IOPS=1947, BW=7788KiB/s (7975kB/s)(7796KiB/1001msec) 00:10:29.455 slat (nsec): min=10048, max=40173, avg=12451.94, stdev=3110.99 00:10:29.455 clat (usec): min=132, max=1471, avg=274.10, stdev=49.50 00:10:29.455 lat (usec): min=143, max=1483, avg=286.55, stdev=49.68 00:10:29.455 clat percentiles (usec): 00:10:29.455 | 1.00th=[ 225], 5.00th=[ 239], 
10.00th=[ 247], 20.00th=[ 255], 00:10:29.455 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:10:29.455 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 310], 00:10:29.455 | 99.00th=[ 396], 99.50th=[ 545], 99.90th=[ 1057], 99.95th=[ 1467], 00:10:29.455 | 99.99th=[ 1467] 00:10:29.455 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:29.455 slat (nsec): min=15889, max=61589, avg=18998.66, stdev=4256.63 00:10:29.455 clat (usec): min=101, max=521, avg=193.75, stdev=22.39 00:10:29.455 lat (usec): min=120, max=542, avg=212.75, stdev=22.93 00:10:29.455 clat percentiles (usec): 00:10:29.455 | 1.00th=[ 135], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:10:29.455 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 198], 00:10:29.455 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 223], 00:10:29.455 | 99.00th=[ 249], 99.50th=[ 297], 99.90th=[ 392], 99.95th=[ 465], 00:10:29.455 | 99.99th=[ 523] 00:10:29.455 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.455 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.455 lat (usec) : 250=57.52%, 500=42.18%, 750=0.18%, 1000=0.08% 00:10:29.455 lat (msec) : 2=0.05% 00:10:29.455 cpu : usr=1.20%, sys=5.10%, ctx=3999, majf=0, minf=13 00:10:29.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.455 issued rwts: total=1949,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.455 job1: (groupid=0, jobs=1): err= 0: pid=79258: Tue Oct 29 10:59:34 2024 00:10:29.455 read: IOPS=1946, BW=7784KiB/s (7971kB/s)(7792KiB/1001msec) 00:10:29.455 slat (nsec): min=10612, max=51597, avg=12542.99, stdev=2955.59 00:10:29.455 clat (usec): min=153, max=2095, avg=274.46, stdev=59.25 00:10:29.455 lat (usec): min=168, max=2121, avg=287.00, stdev=59.59 00:10:29.455 clat percentiles (usec): 00:10:29.455 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 255], 00:10:29.455 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:10:29.455 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 310], 00:10:29.455 | 99.00th=[ 383], 99.50th=[ 515], 99.90th=[ 1270], 99.95th=[ 2089], 00:10:29.455 | 99.99th=[ 2089] 00:10:29.455 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:29.455 slat (nsec): min=15576, max=74175, avg=18644.62, stdev=4385.52 00:10:29.455 clat (usec): min=93, max=705, avg=193.83, stdev=26.48 00:10:29.455 lat (usec): min=113, max=723, avg=212.48, stdev=27.08 00:10:29.455 clat percentiles (usec): 00:10:29.455 | 1.00th=[ 127], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:10:29.455 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:10:29.455 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 223], 00:10:29.455 | 99.00th=[ 249], 99.50th=[ 306], 99.90th=[ 519], 99.95th=[ 660], 00:10:29.455 | 99.99th=[ 709] 00:10:29.455 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.455 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.455 lat (usec) : 100=0.10%, 250=57.53%, 500=42.04%, 750=0.23%, 1000=0.05% 00:10:29.455 lat (msec) : 2=0.03%, 4=0.03% 00:10:29.455 cpu : usr=1.40%, sys=4.80%, ctx=3996, majf=0, minf=11 00:10:29.455 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.455 issued rwts: total=1948,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.455 job2: (groupid=0, jobs=1): err= 0: pid=79259: Tue Oct 29 10:59:34 2024 00:10:29.455 read: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:10:29.455 slat (usec): min=10, max=111, avg=13.01, stdev= 4.22 00:10:29.455 clat (usec): min=131, max=372, avg=175.58, stdev=14.30 00:10:29.455 lat (usec): min=158, max=402, avg=188.59, stdev=15.36 00:10:29.455 clat percentiles (usec): 00:10:29.455 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:10:29.455 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:10:29.455 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:10:29.455 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 269], 99.95th=[ 318], 00:10:29.455 | 99.99th=[ 371] 00:10:29.455 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:29.455 slat (nsec): min=13675, max=90210, avg=19811.83, stdev=5215.11 00:10:29.455 clat (usec): min=102, max=1544, avg=133.98, stdev=29.92 00:10:29.455 lat (usec): min=120, max=1562, avg=153.79, stdev=30.31 00:10:29.456 clat percentiles (usec): 00:10:29.456 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 122], 00:10:29.456 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:10:29.456 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 159], 00:10:29.456 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 202], 99.95th=[ 510], 00:10:29.456 | 99.99th=[ 1549] 00:10:29.456 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:29.456 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:29.456 lat (usec) : 250=99.85%, 500=0.12%, 750=0.02% 00:10:29.456 lat (msec) : 2=0.02% 00:10:29.456 cpu : usr=2.20%, sys=7.70%, ctx=5820, majf=0, minf=12 00:10:29.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.456 issued rwts: total=2747,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.456 job3: (groupid=0, jobs=1): err= 0: pid=79260: Tue Oct 29 10:59:34 2024 00:10:29.456 read: IOPS=2573, BW=10.1MiB/s (10.5MB/s)(10.1MiB/1001msec) 00:10:29.456 slat (usec): min=10, max=102, avg=13.73, stdev= 4.49 00:10:29.456 clat (usec): min=147, max=1495, avg=182.95, stdev=49.43 00:10:29.456 lat (usec): min=160, max=1518, avg=196.68, stdev=50.50 00:10:29.456 clat percentiles (usec): 00:10:29.456 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:10:29.456 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:10:29.456 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 208], 00:10:29.456 | 99.00th=[ 247], 99.50th=[ 314], 99.90th=[ 963], 99.95th=[ 1401], 00:10:29.456 | 99.99th=[ 1500] 00:10:29.456 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:29.456 slat (nsec): min=16948, max=62794, avg=20651.26, stdev=4620.57 00:10:29.456 clat (usec): min=104, max=1764, avg=136.82, stdev=36.42 00:10:29.456 lat (usec): 
min=121, max=1783, avg=157.47, stdev=36.57 00:10:29.456 clat percentiles (usec): 00:10:29.456 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:10:29.456 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:29.456 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:10:29.456 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 441], 99.95th=[ 766], 00:10:29.456 | 99.99th=[ 1762] 00:10:29.456 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:29.456 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:29.456 lat (usec) : 250=99.40%, 500=0.39%, 750=0.11%, 1000=0.05% 00:10:29.456 lat (msec) : 2=0.05% 00:10:29.456 cpu : usr=1.80%, sys=8.00%, ctx=5650, majf=0, minf=11 00:10:29.456 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.456 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.456 issued rwts: total=2576,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.456 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.456 00:10:29.456 Run status group 0 (all jobs): 00:10:29.456 READ: bw=36.0MiB/s (37.7MB/s), 7784KiB/s-10.7MiB/s (7971kB/s-11.2MB/s), io=36.0MiB (37.8MB), run=1001-1001msec 00:10:29.456 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:29.456 00:10:29.456 Disk stats (read/write): 00:10:29.456 nvme0n1: ios=1585/1945, merge=0/0, ticks=447/397, in_queue=844, util=87.26% 00:10:29.456 nvme0n2: ios=1536/1941, merge=0/0, ticks=427/392, in_queue=819, util=87.37% 00:10:29.456 nvme0n3: ios=2429/2560, merge=0/0, ticks=432/374, in_queue=806, util=89.20% 00:10:29.456 nvme0n4: ios=2247/2560, merge=0/0, ticks=413/377, in_queue=790, util=89.56% 00:10:29.456 10:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:29.456 [global] 00:10:29.456 thread=1 00:10:29.456 invalidate=1 00:10:29.456 rw=write 00:10:29.456 time_based=1 00:10:29.456 runtime=1 00:10:29.456 ioengine=libaio 00:10:29.456 direct=1 00:10:29.456 bs=4096 00:10:29.456 iodepth=128 00:10:29.456 norandommap=0 00:10:29.456 numjobs=1 00:10:29.456 00:10:29.456 verify_dump=1 00:10:29.456 verify_backlog=512 00:10:29.456 verify_state_save=0 00:10:29.456 do_verify=1 00:10:29.456 verify=crc32c-intel 00:10:29.456 [job0] 00:10:29.456 filename=/dev/nvme0n1 00:10:29.456 [job1] 00:10:29.456 filename=/dev/nvme0n2 00:10:29.456 [job2] 00:10:29.456 filename=/dev/nvme0n3 00:10:29.456 [job3] 00:10:29.456 filename=/dev/nvme0n4 00:10:29.456 Could not set queue depth (nvme0n1) 00:10:29.456 Could not set queue depth (nvme0n2) 00:10:29.456 Could not set queue depth (nvme0n3) 00:10:29.456 Could not set queue depth (nvme0n4) 00:10:29.456 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.456 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.456 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.456 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.456 fio-3.35 00:10:29.456 Starting 4 threads 00:10:30.834 00:10:30.834 job0: (groupid=0, jobs=1): err= 0: pid=79314: Tue Oct 29 
10:59:35 2024 00:10:30.834 read: IOPS=5258, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1002msec) 00:10:30.834 slat (usec): min=7, max=3442, avg=89.69, stdev=352.33 00:10:30.834 clat (usec): min=627, max=15334, avg=11696.00, stdev=1249.44 00:10:30.834 lat (usec): min=2344, max=15371, avg=11785.69, stdev=1278.22 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[ 6456], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:10:30.834 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:10:30.834 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12911], 95.00th=[13698], 00:10:30.834 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15139], 99.95th=[15270], 00:10:30.834 | 99.99th=[15270] 00:10:30.834 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:30.834 slat (usec): min=8, max=3225, avg=86.50, stdev=366.73 00:10:30.834 clat (usec): min=8510, max=15370, avg=11529.97, stdev=976.89 00:10:30.834 lat (usec): min=8531, max=15387, avg=11616.47, stdev=1029.07 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[ 9372], 5.00th=[10421], 10.00th=[10683], 20.00th=[10814], 00:10:30.834 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:10:30.834 | 70.00th=[11731], 80.00th=[12256], 90.00th=[12780], 95.00th=[13566], 00:10:30.834 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15401], 99.95th=[15401], 00:10:30.834 | 99.99th=[15401] 00:10:30.834 bw ( KiB/s): min=22520, max=22520, per=34.90%, avg=22520.00, stdev= 0.00, samples=1 00:10:30.834 iops : min= 5630, max= 5630, avg=5630.00, stdev= 0.00, samples=1 00:10:30.834 lat (usec) : 750=0.01% 00:10:30.834 lat (msec) : 4=0.22%, 10=3.55%, 20=96.22% 00:10:30.834 cpu : usr=5.00%, sys=13.79%, ctx=538, majf=0, minf=15 00:10:30.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:30.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.834 issued rwts: total=5269,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.834 job1: (groupid=0, jobs=1): err= 0: pid=79315: Tue Oct 29 10:59:35 2024 00:10:30.834 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:10:30.834 slat (usec): min=4, max=9781, avg=208.65, stdev=913.73 00:10:30.834 clat (usec): min=15884, max=49433, avg=25582.83, stdev=5139.06 00:10:30.834 lat (usec): min=15925, max=49445, avg=25791.48, stdev=5236.64 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[17171], 5.00th=[19530], 10.00th=[20841], 20.00th=[21103], 00:10:30.834 | 30.00th=[21627], 40.00th=[21890], 50.00th=[24249], 60.00th=[26608], 00:10:30.834 | 70.00th=[28181], 80.00th=[31065], 90.00th=[31327], 95.00th=[32900], 00:10:30.834 | 99.00th=[41681], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:30.834 | 99.99th=[49546] 00:10:30.834 write: IOPS=2504, BW=9.78MiB/s (10.3MB/s)(9.82MiB/1004msec); 0 zone resets 00:10:30.834 slat (usec): min=11, max=8912, avg=220.66, stdev=879.06 00:10:30.834 clat (usec): min=532, max=65104, avg=29342.84, stdev=14660.94 00:10:30.834 lat (usec): min=5024, max=65131, avg=29563.51, stdev=14754.70 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[ 5669], 5.00th=[13960], 10.00th=[14746], 20.00th=[17171], 00:10:30.834 | 30.00th=[17957], 40.00th=[20579], 50.00th=[22938], 60.00th=[30016], 00:10:30.834 | 70.00th=[39060], 80.00th=[43779], 90.00th=[52167], 95.00th=[55837], 00:10:30.834 | 99.00th=[62129], 
99.50th=[64750], 99.90th=[65274], 99.95th=[65274], 00:10:30.834 | 99.99th=[65274] 00:10:30.834 bw ( KiB/s): min= 9064, max=10032, per=14.80%, avg=9548.00, stdev=684.48, samples=2 00:10:30.834 iops : min= 2266, max= 2508, avg=2387.00, stdev=171.12, samples=2 00:10:30.834 lat (usec) : 750=0.02% 00:10:30.834 lat (msec) : 10=1.82%, 20=20.07%, 50=71.49%, 100=6.60% 00:10:30.834 cpu : usr=3.29%, sys=6.38%, ctx=271, majf=0, minf=17 00:10:30.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:30.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.834 issued rwts: total=2048,2515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.834 job2: (groupid=0, jobs=1): err= 0: pid=79316: Tue Oct 29 10:59:35 2024 00:10:30.834 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:30.834 slat (usec): min=5, max=3964, avg=102.21, stdev=411.47 00:10:30.834 clat (usec): min=10213, max=17665, avg=13467.06, stdev=1010.14 00:10:30.834 lat (usec): min=10232, max=18112, avg=13569.27, stdev=1061.92 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[10683], 5.00th=[11731], 10.00th=[12387], 20.00th=[12911], 00:10:30.834 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13435], 00:10:30.834 | 70.00th=[13566], 80.00th=[13829], 90.00th=[15008], 95.00th=[15533], 00:10:30.834 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:10:30.834 | 99.99th=[17695] 00:10:30.834 write: IOPS=4999, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1002msec); 0 zone resets 00:10:30.834 slat (usec): min=11, max=4305, avg=98.18, stdev=434.87 00:10:30.834 clat (usec): min=1929, max=17651, avg=12890.52, stdev=1462.40 00:10:30.834 lat (usec): min=1950, max=17668, avg=12988.70, stdev=1512.69 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[ 6521], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:10:30.834 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:30.834 | 70.00th=[13042], 80.00th=[13698], 90.00th=[14222], 95.00th=[15139], 00:10:30.834 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:10:30.834 | 99.99th=[17695] 00:10:30.834 bw ( KiB/s): min=18576, max=20439, per=30.23%, avg=19507.50, stdev=1317.34, samples=2 00:10:30.834 iops : min= 4644, max= 5109, avg=4876.50, stdev=328.80, samples=2 00:10:30.834 lat (msec) : 2=0.03%, 4=0.24%, 10=0.53%, 20=99.20% 00:10:30.834 cpu : usr=4.70%, sys=12.79%, ctx=441, majf=0, minf=7 00:10:30.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:30.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.834 issued rwts: total=4608,5009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.834 job3: (groupid=0, jobs=1): err= 0: pid=79317: Tue Oct 29 10:59:35 2024 00:10:30.834 read: IOPS=2672, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1006msec) 00:10:30.834 slat (usec): min=4, max=10312, avg=191.70, stdev=1029.80 00:10:30.834 clat (usec): min=1656, max=43461, avg=23804.53, stdev=6797.28 00:10:30.834 lat (usec): min=7306, max=43474, avg=23996.23, stdev=6774.75 00:10:30.834 clat percentiles (usec): 00:10:30.834 | 1.00th=[ 7767], 5.00th=[17171], 10.00th=[18220], 20.00th=[19268], 00:10:30.834 | 30.00th=[19530], 
40.00th=[19792], 50.00th=[20317], 60.00th=[24249], 00:10:30.835 | 70.00th=[27657], 80.00th=[28181], 90.00th=[32900], 95.00th=[39584], 00:10:30.835 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:30.835 | 99.99th=[43254] 00:10:30.835 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:30.835 slat (usec): min=13, max=7498, avg=150.67, stdev=733.05 00:10:30.835 clat (usec): min=11707, max=38966, avg=20394.00, stdev=4618.31 00:10:30.835 lat (usec): min=14589, max=39005, avg=20544.66, stdev=4575.06 00:10:30.835 clat percentiles (usec): 00:10:30.835 | 1.00th=[13304], 5.00th=[15008], 10.00th=[15533], 20.00th=[16188], 00:10:30.835 | 30.00th=[16450], 40.00th=[19006], 50.00th=[20317], 60.00th=[20841], 00:10:30.835 | 70.00th=[21103], 80.00th=[24249], 90.00th=[26608], 95.00th=[27395], 00:10:30.835 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:10:30.835 | 99.99th=[39060] 00:10:30.835 bw ( KiB/s): min=12288, max=12288, per=19.04%, avg=12288.00, stdev= 0.00, samples=2 00:10:30.835 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:30.835 lat (msec) : 2=0.02%, 10=0.56%, 20=46.90%, 50=52.53% 00:10:30.835 cpu : usr=2.89%, sys=8.76%, ctx=183, majf=0, minf=11 00:10:30.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:30.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.835 issued rwts: total=2689,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.835 00:10:30.835 Run status group 0 (all jobs): 00:10:30.835 READ: bw=56.7MiB/s (59.5MB/s), 8159KiB/s-20.5MiB/s (8355kB/s-21.5MB/s), io=57.1MiB (59.9MB), run=1002-1006msec 00:10:30.835 WRITE: bw=63.0MiB/s (66.1MB/s), 9.78MiB/s-22.0MiB/s (10.3MB/s-23.0MB/s), io=63.4MiB (66.5MB), run=1002-1006msec 00:10:30.835 00:10:30.835 Disk stats (read/write): 00:10:30.835 nvme0n1: ios=4657/4738, merge=0/0, ticks=17239/15476, in_queue=32715, util=87.96% 00:10:30.835 nvme0n2: ios=1684/2048, merge=0/0, ticks=14246/20382, in_queue=34628, util=87.49% 00:10:30.835 nvme0n3: ios=4096/4144, merge=0/0, ticks=17526/15325, in_queue=32851, util=89.22% 00:10:30.835 nvme0n4: ios=2464/2560, merge=0/0, ticks=14148/10957, in_queue=25105, util=89.69% 00:10:30.835 10:59:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:30.835 [global] 00:10:30.835 thread=1 00:10:30.835 invalidate=1 00:10:30.835 rw=randwrite 00:10:30.835 time_based=1 00:10:30.835 runtime=1 00:10:30.835 ioengine=libaio 00:10:30.835 direct=1 00:10:30.835 bs=4096 00:10:30.835 iodepth=128 00:10:30.835 norandommap=0 00:10:30.835 numjobs=1 00:10:30.835 00:10:30.835 verify_dump=1 00:10:30.835 verify_backlog=512 00:10:30.835 verify_state_save=0 00:10:30.835 do_verify=1 00:10:30.835 verify=crc32c-intel 00:10:30.835 [job0] 00:10:30.835 filename=/dev/nvme0n1 00:10:30.835 [job1] 00:10:30.835 filename=/dev/nvme0n2 00:10:30.835 [job2] 00:10:30.835 filename=/dev/nvme0n3 00:10:30.835 [job3] 00:10:30.835 filename=/dev/nvme0n4 00:10:30.835 Could not set queue depth (nvme0n1) 00:10:30.835 Could not set queue depth (nvme0n2) 00:10:30.835 Could not set queue depth (nvme0n3) 00:10:30.835 Could not set queue depth (nvme0n4) 00:10:30.835 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:10:30.835 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.835 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.835 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.835 fio-3.35 00:10:30.835 Starting 4 threads 00:10:32.213 00:10:32.213 job0: (groupid=0, jobs=1): err= 0: pid=79378: Tue Oct 29 10:59:37 2024 00:10:32.213 read: IOPS=5436, BW=21.2MiB/s (22.3MB/s)(21.4MiB/1006msec) 00:10:32.213 slat (usec): min=6, max=11987, avg=91.79, stdev=553.42 00:10:32.213 clat (usec): min=1874, max=22588, avg=12084.04, stdev=1617.03 00:10:32.213 lat (usec): min=6260, max=31648, avg=12175.84, stdev=1631.75 00:10:32.213 clat percentiles (usec): 00:10:32.213 | 1.00th=[ 7111], 5.00th=[10159], 10.00th=[10814], 20.00th=[11469], 00:10:32.213 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:10:32.213 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13304], 95.00th=[13960], 00:10:32.213 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20841], 99.95th=[22414], 00:10:32.213 | 99.99th=[22676] 00:10:32.213 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:32.213 slat (usec): min=5, max=10327, avg=82.07, stdev=509.86 00:10:32.213 clat (usec): min=5839, max=21158, avg=10901.13, stdev=1145.59 00:10:32.213 lat (usec): min=7326, max=21180, avg=10983.20, stdev=1097.94 00:10:32.213 clat percentiles (usec): 00:10:32.213 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10159], 00:10:32.213 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:10:32.213 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:10:32.213 | 99.00th=[15008], 99.50th=[15139], 99.90th=[17433], 99.95th=[17433], 00:10:32.213 | 99.99th=[21103] 00:10:32.213 bw ( KiB/s): min=21848, max=23208, per=35.48%, avg=22528.00, stdev=961.67, samples=2 00:10:32.213 iops : min= 5462, max= 5802, avg=5632.00, stdev=240.42, samples=2 00:10:32.213 lat (msec) : 2=0.01%, 10=8.83%, 20=90.73%, 50=0.43% 00:10:32.213 cpu : usr=5.07%, sys=12.74%, ctx=269, majf=0, minf=12 00:10:32.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:32.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.213 issued rwts: total=5469,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.213 job1: (groupid=0, jobs=1): err= 0: pid=79379: Tue Oct 29 10:59:37 2024 00:10:32.213 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:10:32.213 slat (usec): min=10, max=20286, avg=178.06, stdev=1218.61 00:10:32.213 clat (usec): min=15014, max=45362, avg=25729.88, stdev=4401.02 00:10:32.213 lat (usec): min=15040, max=49935, avg=25907.94, stdev=4398.83 00:10:32.213 clat percentiles (usec): 00:10:32.213 | 1.00th=[15139], 5.00th=[17957], 10.00th=[23462], 20.00th=[24773], 00:10:32.213 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25560], 00:10:32.213 | 70.00th=[25822], 80.00th=[26870], 90.00th=[28443], 95.00th=[32113], 00:10:32.213 | 99.00th=[42730], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:32.213 | 99.99th=[45351] 00:10:32.213 write: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1013msec); 0 zone resets 00:10:32.214 slat (usec): min=6, 
max=26633, avg=200.82, stdev=1358.33 00:10:32.214 clat (usec): min=2811, max=40146, avg=23609.09, stdev=3683.30 00:10:32.214 lat (usec): min=12524, max=40167, avg=23809.91, stdev=3509.42 00:10:32.214 clat percentiles (usec): 00:10:32.214 | 1.00th=[13042], 5.00th=[17171], 10.00th=[21365], 20.00th=[22676], 00:10:32.214 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:10:32.214 | 70.00th=[23987], 80.00th=[24511], 90.00th=[24773], 95.00th=[28967], 00:10:32.214 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:10:32.214 | 99.99th=[40109] 00:10:32.214 bw ( KiB/s): min= 8712, max=11768, per=16.13%, avg=10240.00, stdev=2160.92, samples=2 00:10:32.214 iops : min= 2178, max= 2942, avg=2560.00, stdev=540.23, samples=2 00:10:32.214 lat (msec) : 4=0.02%, 20=7.24%, 50=92.74% 00:10:32.214 cpu : usr=2.17%, sys=7.91%, ctx=110, majf=0, minf=15 00:10:32.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:32.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.214 issued rwts: total=2560,2645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.214 job2: (groupid=0, jobs=1): err= 0: pid=79380: Tue Oct 29 10:59:37 2024 00:10:32.214 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:10:32.214 slat (usec): min=5, max=11941, avg=101.72, stdev=640.51 00:10:32.214 clat (usec): min=5659, max=25439, avg=13981.95, stdev=2046.16 00:10:32.214 lat (usec): min=5702, max=25908, avg=14083.67, stdev=2069.22 00:10:32.214 clat percentiles (usec): 00:10:32.214 | 1.00th=[ 8717], 5.00th=[11469], 10.00th=[13042], 20.00th=[13435], 00:10:32.214 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:10:32.214 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:10:32.214 | 99.00th=[22938], 99.50th=[23987], 99.90th=[25297], 99.95th=[25297], 00:10:32.214 | 99.99th=[25560] 00:10:32.214 write: IOPS=5028, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1005msec); 0 zone resets 00:10:32.214 slat (usec): min=5, max=10341, avg=98.47, stdev=575.11 00:10:32.214 clat (usec): min=613, max=25370, avg=12458.48, stdev=1736.25 00:10:32.214 lat (usec): min=3601, max=25378, avg=12556.95, stdev=1663.79 00:10:32.214 clat percentiles (usec): 00:10:32.214 | 1.00th=[ 5276], 5.00th=[ 9503], 10.00th=[11207], 20.00th=[11731], 00:10:32.214 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:10:32.214 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13829], 95.00th=[14484], 00:10:32.214 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:10:32.214 | 99.99th=[25297] 00:10:32.214 bw ( KiB/s): min=18928, max=20480, per=31.03%, avg=19704.00, stdev=1097.43, samples=2 00:10:32.214 iops : min= 4732, max= 5120, avg=4926.00, stdev=274.36, samples=2 00:10:32.214 lat (usec) : 750=0.01% 00:10:32.214 lat (msec) : 4=0.05%, 10=5.15%, 20=92.93%, 50=1.85% 00:10:32.214 cpu : usr=4.08%, sys=12.95%, ctx=258, majf=0, minf=9 00:10:32.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:32.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.214 issued rwts: total=4608,5054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.214 job3: (groupid=0, 
jobs=1): err= 0: pid=79381: Tue Oct 29 10:59:37 2024 00:10:32.214 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:10:32.214 slat (usec): min=7, max=22550, avg=197.48, stdev=1486.67 00:10:32.214 clat (usec): min=15479, max=45568, avg=26398.76, stdev=3150.86 00:10:32.214 lat (usec): min=15503, max=47884, avg=26596.24, stdev=3328.83 00:10:32.214 clat percentiles (usec): 00:10:32.214 | 1.00th=[19268], 5.00th=[21890], 10.00th=[23987], 20.00th=[25035], 00:10:32.214 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:10:32.214 | 70.00th=[26608], 80.00th=[29230], 90.00th=[30540], 95.00th=[31589], 00:10:32.214 | 99.00th=[35390], 99.50th=[36963], 99.90th=[41157], 99.95th=[41681], 00:10:32.214 | 99.99th=[45351] 00:10:32.214 write: IOPS=2721, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1011msec); 0 zone resets 00:10:32.214 slat (usec): min=7, max=14828, avg=175.07, stdev=1181.35 00:10:32.214 clat (usec): min=1549, max=35147, avg=22035.96, stdev=3920.65 00:10:32.214 lat (usec): min=11452, max=35176, avg=22211.04, stdev=3782.20 00:10:32.214 clat percentiles (usec): 00:10:32.214 | 1.00th=[11863], 5.00th=[12649], 10.00th=[14877], 20.00th=[19006], 00:10:32.214 | 30.00th=[22152], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:10:32.214 | 70.00th=[23987], 80.00th=[24511], 90.00th=[24773], 95.00th=[27919], 00:10:32.214 | 99.00th=[28705], 99.50th=[28705], 99.90th=[30802], 99.95th=[31327], 00:10:32.214 | 99.99th=[35390] 00:10:32.214 bw ( KiB/s): min= 8768, max=12280, per=16.57%, avg=10524.00, stdev=2483.36, samples=2 00:10:32.214 iops : min= 2192, max= 3070, avg=2631.00, stdev=620.84, samples=2 00:10:32.214 lat (msec) : 2=0.02%, 10=0.02%, 20=13.42%, 50=86.54% 00:10:32.214 cpu : usr=2.18%, sys=7.92%, ctx=117, majf=0, minf=15 00:10:32.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:32.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.214 issued rwts: total=2560,2751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.214 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.214 00:10:32.214 Run status group 0 (all jobs): 00:10:32.214 READ: bw=58.6MiB/s (61.4MB/s), 9.87MiB/s-21.2MiB/s (10.4MB/s-22.3MB/s), io=59.4MiB (62.2MB), run=1005-1013msec 00:10:32.214 WRITE: bw=62.0MiB/s (65.0MB/s), 10.2MiB/s-21.9MiB/s (10.7MB/s-22.9MB/s), io=62.8MiB (65.9MB), run=1005-1013msec 00:10:32.214 00:10:32.214 Disk stats (read/write): 00:10:32.214 nvme0n1: ios=4658/4822, merge=0/0, ticks=52684/48078, in_queue=100762, util=87.37% 00:10:32.214 nvme0n2: ios=2097/2310, merge=0/0, ticks=50522/52899, in_queue=103421, util=87.99% 00:10:32.214 nvme0n3: ios=4087/4104, merge=0/0, ticks=54071/47767, in_queue=101838, util=89.22% 00:10:32.214 nvme0n4: ios=2048/2432, merge=0/0, ticks=51874/51820, in_queue=103694, util=89.68% 00:10:32.214 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:32.214 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=79394 00:10:32.214 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:32.214 10:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:32.214 [global] 00:10:32.214 thread=1 00:10:32.214 invalidate=1 00:10:32.214 rw=read 00:10:32.214 time_based=1 00:10:32.214 runtime=10 00:10:32.214 ioengine=libaio 
00:10:32.214 direct=1 00:10:32.214 bs=4096 00:10:32.214 iodepth=1 00:10:32.214 norandommap=1 00:10:32.214 numjobs=1 00:10:32.214 00:10:32.214 [job0] 00:10:32.214 filename=/dev/nvme0n1 00:10:32.214 [job1] 00:10:32.214 filename=/dev/nvme0n2 00:10:32.214 [job2] 00:10:32.214 filename=/dev/nvme0n3 00:10:32.214 [job3] 00:10:32.214 filename=/dev/nvme0n4 00:10:32.214 Could not set queue depth (nvme0n1) 00:10:32.214 Could not set queue depth (nvme0n2) 00:10:32.214 Could not set queue depth (nvme0n3) 00:10:32.214 Could not set queue depth (nvme0n4) 00:10:32.214 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.214 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.214 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.214 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.214 fio-3.35 00:10:32.214 Starting 4 threads 00:10:35.501 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:35.501 fio: pid=79442, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.501 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=57450496, buflen=4096 00:10:35.501 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:35.759 fio: pid=79441, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.759 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=63635456, buflen=4096 00:10:35.760 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.760 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:36.018 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56729600, buflen=4096 00:10:36.018 fio: pid=79439, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.018 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.018 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:36.278 fio: pid=79440, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.278 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=63488000, buflen=4096 00:10:36.278 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.278 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:36.278 00:10:36.278 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79439: Tue Oct 29 10:59:41 2024 00:10:36.278 read: IOPS=3900, BW=15.2MiB/s (16.0MB/s)(54.1MiB/3551msec) 00:10:36.278 slat (usec): min=10, max=11842, avg=15.71, stdev=173.55 00:10:36.278 clat (usec): min=133, max=2547, avg=239.45, 
stdev=63.45 00:10:36.278 lat (usec): min=146, max=12010, avg=255.16, stdev=184.76 00:10:36.278 clat percentiles (usec): 00:10:36.278 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 180], 00:10:36.278 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:10:36.278 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:10:36.278 | 99.00th=[ 318], 99.50th=[ 392], 99.90th=[ 848], 99.95th=[ 1090], 00:10:36.278 | 99.99th=[ 2180] 00:10:36.278 bw ( KiB/s): min=13960, max=15312, per=23.53%, avg=14493.33, stdev=513.02, samples=6 00:10:36.278 iops : min= 3490, max= 3828, avg=3623.33, stdev=128.25, samples=6 00:10:36.278 lat (usec) : 250=48.29%, 500=51.47%, 750=0.11%, 1000=0.07% 00:10:36.278 lat (msec) : 2=0.04%, 4=0.02% 00:10:36.278 cpu : usr=1.10%, sys=4.45%, ctx=13855, majf=0, minf=1 00:10:36.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 issued rwts: total=13851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.278 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79440: Tue Oct 29 10:59:41 2024 00:10:36.278 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(60.5MiB/3826msec) 00:10:36.278 slat (usec): min=9, max=10405, avg=16.32, stdev=173.24 00:10:36.278 clat (usec): min=129, max=2063, avg=229.22, stdev=60.54 00:10:36.278 lat (usec): min=141, max=10642, avg=245.54, stdev=183.28 00:10:36.278 clat percentiles (usec): 00:10:36.278 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 157], 00:10:36.278 | 30.00th=[ 219], 40.00th=[ 237], 50.00th=[ 247], 60.00th=[ 255], 00:10:36.278 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:10:36.278 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 586], 99.95th=[ 922], 00:10:36.278 | 99.99th=[ 1958] 00:10:36.278 bw ( KiB/s): min=14216, max=20915, per=25.29%, avg=15574.14, stdev=2387.96, samples=7 00:10:36.278 iops : min= 3554, max= 5228, avg=3893.43, stdev=596.71, samples=7 00:10:36.278 lat (usec) : 250=53.71%, 500=46.15%, 750=0.07%, 1000=0.02% 00:10:36.278 lat (msec) : 2=0.03%, 4=0.01% 00:10:36.278 cpu : usr=1.07%, sys=4.71%, ctx=15510, majf=0, minf=2 00:10:36.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 issued rwts: total=15501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.278 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79441: Tue Oct 29 10:59:41 2024 00:10:36.278 read: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(60.7MiB/3283msec) 00:10:36.278 slat (usec): min=7, max=14532, avg=14.58, stdev=132.29 00:10:36.278 clat (usec): min=143, max=6074, avg=195.55, stdev=85.90 00:10:36.278 lat (usec): min=154, max=14729, avg=210.13, stdev=157.91 00:10:36.278 clat percentiles (usec): 00:10:36.278 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:36.278 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 186], 00:10:36.278 | 70.00th=[ 196], 80.00th=[ 235], 90.00th=[ 255], 95.00th=[ 269], 00:10:36.278 | 99.00th=[ 297], 99.50th=[ 
318], 99.90th=[ 988], 99.95th=[ 1795], 00:10:36.278 | 99.99th=[ 4047] 00:10:36.278 bw ( KiB/s): min=14880, max=21648, per=30.72%, avg=18918.67, stdev=2993.67, samples=6 00:10:36.278 iops : min= 3720, max= 5412, avg=4729.67, stdev=748.42, samples=6 00:10:36.278 lat (usec) : 250=87.37%, 500=12.45%, 750=0.03%, 1000=0.05% 00:10:36.278 lat (msec) : 2=0.06%, 4=0.02%, 10=0.01% 00:10:36.278 cpu : usr=1.52%, sys=5.58%, ctx=15548, majf=0, minf=2 00:10:36.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 issued rwts: total=15537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.278 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79442: Tue Oct 29 10:59:41 2024 00:10:36.278 read: IOPS=4703, BW=18.4MiB/s (19.3MB/s)(54.8MiB/2982msec) 00:10:36.278 slat (usec): min=7, max=154, avg=13.35, stdev= 4.44 00:10:36.278 clat (usec): min=137, max=1553, avg=197.93, stdev=40.92 00:10:36.278 lat (usec): min=149, max=1564, avg=211.28, stdev=40.83 00:10:36.278 clat percentiles (usec): 00:10:36.278 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:10:36.278 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 192], 00:10:36.278 | 70.00th=[ 206], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 273], 00:10:36.278 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 355], 99.95th=[ 461], 00:10:36.278 | 99.99th=[ 750] 00:10:36.278 bw ( KiB/s): min=15320, max=21184, per=31.92%, avg=19659.20, stdev=2444.60, samples=5 00:10:36.278 iops : min= 3830, max= 5296, avg=4914.80, stdev=611.15, samples=5 00:10:36.278 lat (usec) : 250=85.76%, 500=14.19%, 750=0.04% 00:10:36.278 lat (msec) : 2=0.01% 00:10:36.278 cpu : usr=1.14%, sys=5.97%, ctx=14029, majf=0, minf=1 00:10:36.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.278 issued rwts: total=14027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.278 00:10:36.278 Run status group 0 (all jobs): 00:10:36.278 READ: bw=60.1MiB/s (63.1MB/s), 15.2MiB/s-18.5MiB/s (16.0MB/s-19.4MB/s), io=230MiB (241MB), run=2982-3826msec 00:10:36.278 00:10:36.278 Disk stats (read/write): 00:10:36.278 nvme0n1: ios=12753/0, merge=0/0, ticks=3156/0, in_queue=3156, util=95.19% 00:10:36.278 nvme0n2: ios=14213/0, merge=0/0, ticks=3435/0, in_queue=3435, util=95.50% 00:10:36.278 nvme0n3: ios=14642/0, merge=0/0, ticks=2860/0, in_queue=2860, util=95.84% 00:10:36.278 nvme0n4: ios=13599/0, merge=0/0, ticks=2660/0, in_queue=2660, util=96.73% 00:10:36.538 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.538 10:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:36.797 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.797 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:37.056 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.056 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:37.315 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.315 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:37.574 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:37.574 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 79394 00:10:37.574 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:37.574 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:37.574 nvmf hotplug test: fio failed as expected 00:10:37.574 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.833 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.093 rmmod nvme_tcp 00:10:38.093 rmmod nvme_fabrics 00:10:38.093 rmmod nvme_keyring 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 79018 ']' 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 79018 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 79018 ']' 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 79018 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79018 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:38.093 killing process with pid 79018 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79018' 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 79018 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 79018 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:38.093 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:38.416 10:59:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:38.416 00:10:38.416 real 0m19.402s 00:10:38.416 user 1m12.518s 00:10:38.416 sys 0m10.302s 00:10:38.416 ************************************ 00:10:38.416 END TEST nvmf_fio_target 00:10:38.416 ************************************ 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.416 ************************************ 00:10:38.416 START TEST nvmf_bdevio 00:10:38.416 ************************************ 00:10:38.416 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.676 * Looking for test storage... 
00:10:38.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:38.676 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.676 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.676 10:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.676 --rc genhtml_branch_coverage=1 00:10:38.676 --rc genhtml_function_coverage=1 00:10:38.676 --rc genhtml_legend=1 00:10:38.676 --rc geninfo_all_blocks=1 00:10:38.676 --rc geninfo_unexecuted_blocks=1 00:10:38.676 00:10:38.676 ' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.676 --rc genhtml_branch_coverage=1 00:10:38.676 --rc genhtml_function_coverage=1 00:10:38.676 --rc genhtml_legend=1 00:10:38.676 --rc geninfo_all_blocks=1 00:10:38.676 --rc geninfo_unexecuted_blocks=1 00:10:38.676 00:10:38.676 ' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.676 --rc genhtml_branch_coverage=1 00:10:38.676 --rc genhtml_function_coverage=1 00:10:38.676 --rc genhtml_legend=1 00:10:38.676 --rc geninfo_all_blocks=1 00:10:38.676 --rc geninfo_unexecuted_blocks=1 00:10:38.676 00:10:38.676 ' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:38.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.676 --rc genhtml_branch_coverage=1 00:10:38.676 --rc genhtml_function_coverage=1 00:10:38.676 --rc genhtml_legend=1 00:10:38.676 --rc geninfo_all_blocks=1 00:10:38.676 --rc geninfo_unexecuted_blocks=1 00:10:38.676 00:10:38.676 ' 00:10:38.676 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.677 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:38.677 Cannot find device "nvmf_init_br" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:38.677 Cannot find device "nvmf_init_br2" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:38.677 Cannot find device "nvmf_tgt_br" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.677 Cannot find device "nvmf_tgt_br2" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:38.677 Cannot find device "nvmf_init_br" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:38.677 Cannot find device "nvmf_init_br2" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:38.677 Cannot find device "nvmf_tgt_br" 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:38.677 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:38.677 Cannot find device "nvmf_tgt_br2" 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:38.937 Cannot find device "nvmf_br" 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:38.937 Cannot find device "nvmf_init_if" 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:38.937 Cannot find device "nvmf_init_if2" 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.937 
10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:38.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:10:38.937 00:10:38.937 --- 10.0.0.3 ping statistics --- 00:10:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.937 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:10:38.937 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:39.196 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:39.196 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:10:39.196 00:10:39.196 --- 10.0.0.4 ping statistics --- 00:10:39.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.196 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:39.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:39.196 00:10:39.196 --- 10.0.0.1 ping statistics --- 00:10:39.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.196 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:39.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:39.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:39.196 00:10:39.196 --- 10.0.0.2 ping statistics --- 00:10:39.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.196 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=79765 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 79765 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 79765 ']' 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.196 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.196 [2024-10-29 10:59:44.543729] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
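nvmfappstart above launches the SPDK target inside the test namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x78, i.e. cores 3-6) and waitforlisten then blocks until the application answers on /var/tmp/spdk.sock; the EAL and reactor notices that follow are that startup completing. A rough standalone equivalent, assuming a stock SPDK checkout with scripts/rpc.py available (the polling loop is an illustration, not the harness's own waitforlisten implementation):

  # start the target on cores 3-6 (mask 0x78) inside the test namespace
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  # wait until the default RPC socket accepts requests before configuring anything
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.2
  done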
00:10:39.196 [2024-10-29 10:59:44.543825] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.455 [2024-10-29 10:59:44.699843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.455 [2024-10-29 10:59:44.724577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.455 [2024-10-29 10:59:44.724650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.455 [2024-10-29 10:59:44.724663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.455 [2024-10-29 10:59:44.724674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.455 [2024-10-29 10:59:44.724683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.455 [2024-10-29 10:59:44.725953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:39.455 [2024-10-29 10:59:44.726000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:39.455 [2024-10-29 10:59:44.726143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:39.455 [2024-10-29 10:59:44.726151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.455 [2024-10-29 10:59:44.758739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 [2024-10-29 10:59:44.849468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 Malloc0 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:39.455 [2024-10-29 10:59:44.910065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.455 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.455 { 00:10:39.456 "params": { 00:10:39.456 "name": "Nvme$subsystem", 00:10:39.456 "trtype": "$TEST_TRANSPORT", 00:10:39.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.456 "adrfam": "ipv4", 00:10:39.456 "trsvcid": "$NVMF_PORT", 00:10:39.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.456 "hdgst": ${hdgst:-false}, 00:10:39.456 "ddgst": ${ddgst:-false} 00:10:39.456 }, 00:10:39.456 "method": "bdev_nvme_attach_controller" 00:10:39.456 } 00:10:39.456 EOF 00:10:39.456 )") 00:10:39.456 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:39.456 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
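At this point the target has been configured entirely over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.3:4420. gen_nvmf_target_json then expands the template above into the bdev_nvme_attach_controller JSON printed next, which bdevio reads over /dev/fd/62. The same configuration issued with the stock scripts/rpc.py client would look like the following sketch (the harness's rpc_cmd wrapper is what actually runs):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420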
00:10:39.456 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:39.456 10:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.456 "params": { 00:10:39.456 "name": "Nvme1", 00:10:39.456 "trtype": "tcp", 00:10:39.456 "traddr": "10.0.0.3", 00:10:39.456 "adrfam": "ipv4", 00:10:39.456 "trsvcid": "4420", 00:10:39.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.456 "hdgst": false, 00:10:39.456 "ddgst": false 00:10:39.456 }, 00:10:39.456 "method": "bdev_nvme_attach_controller" 00:10:39.456 }' 00:10:39.714 [2024-10-29 10:59:44.973931] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:10:39.714 [2024-10-29 10:59:44.974027] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79788 ] 00:10:39.714 [2024-10-29 10:59:45.131234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.714 [2024-10-29 10:59:45.158089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.714 [2024-10-29 10:59:45.158224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.714 [2024-10-29 10:59:45.158232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.714 [2024-10-29 10:59:45.200454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.973 I/O targets: 00:10:39.973 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:39.973 00:10:39.973 00:10:39.973 CUnit - A unit testing framework for C - Version 2.1-3 00:10:39.973 http://cunit.sourceforge.net/ 00:10:39.973 00:10:39.973 00:10:39.973 Suite: bdevio tests on: Nvme1n1 00:10:39.973 Test: blockdev write read block ...passed 00:10:39.973 Test: blockdev write zeroes read block ...passed 00:10:39.973 Test: blockdev write zeroes read no split ...passed 00:10:39.973 Test: blockdev write zeroes read split ...passed 00:10:39.973 Test: blockdev write zeroes read split partial ...passed 00:10:39.973 Test: blockdev reset ...[2024-10-29 10:59:45.331989] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:39.973 [2024-10-29 10:59:45.332363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf948d0 (9): Bad file descriptor 00:10:39.973 passed 00:10:39.973 Test: blockdev write read 8 blocks ...[2024-10-29 10:59:45.346594] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:39.973 passed 00:10:39.973 Test: blockdev write read size > 128k ...passed 00:10:39.973 Test: blockdev write read invalid size ...passed 00:10:39.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:39.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:39.973 Test: blockdev write read max offset ...passed 00:10:39.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:39.973 Test: blockdev writev readv 8 blocks ...passed 00:10:39.974 Test: blockdev writev readv 30 x 1block ...passed 00:10:39.974 Test: blockdev writev readv block ...passed 00:10:39.974 Test: blockdev writev readv size > 128k ...passed 00:10:39.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:39.974 Test: blockdev comparev and writev ...[2024-10-29 10:59:45.354409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.354474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.354496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.354508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.354790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.354808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.354825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.354835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.355113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.355130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.355147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.355157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.355557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.355721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.355925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:39.974 [2024-10-29 10:59:45.356060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:39.974 passed 00:10:39.974 Test: blockdev nvme passthru rw ...passed 00:10:39.974 Test: blockdev nvme passthru vendor specific ...[2024-10-29 10:59:45.357393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.974 [2024-10-29 10:59:45.357423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.357544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.974 [2024-10-29 10:59:45.357561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:39.974 passed 00:10:39.974 Test: blockdev nvme admin passthru ...[2024-10-29 10:59:45.357663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.974 [2024-10-29 10:59:45.357686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:39.974 [2024-10-29 10:59:45.357789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:39.974 [2024-10-29 10:59:45.357805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:39.974 passed 00:10:39.974 Test: blockdev copy ...passed 00:10:39.974 00:10:39.974 Run Summary: Type Total Ran Passed Failed Inactive 00:10:39.974 suites 1 1 n/a 0 0 00:10:39.974 tests 23 23 23 0 0 00:10:39.974 asserts 152 152 152 0 n/a 00:10:39.974 00:10:39.974 Elapsed time = 0.158 seconds 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:40.233 rmmod nvme_tcp 00:10:40.233 rmmod nvme_fabrics 00:10:40.233 rmmod nvme_keyring 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 79765 ']' 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 79765 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 79765 ']' 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 79765 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79765 00:10:40.233 killing process with pid 79765 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79765' 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 79765 00:10:40.233 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 79765 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:40.492 10:59:45 
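Teardown mirrors setup: killprocess stops the target, the iptr helper strips only the firewall rules the harness added, and nvmf_veth_fini deletes the bridge and veth interfaces before the namespace itself is removed. The firewall cleanup works because every rule was inserted with an 'SPDK_NVMF:' comment, so removing them is just a filter over the saved ruleset, exactly as shown in the trace above:

  # drop every rule tagged with the SPDK_NVMF comment, leave everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore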
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.492 10:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:40.751 00:10:40.751 real 0m2.196s 00:10:40.751 user 0m5.417s 00:10:40.751 sys 0m0.756s 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:40.751 ************************************ 00:10:40.751 END TEST nvmf_bdevio 00:10:40.751 ************************************ 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:40.751 00:10:40.751 real 2m28.648s 00:10:40.751 user 6m25.048s 00:10:40.751 sys 0m53.554s 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.751 ************************************ 00:10:40.751 END TEST nvmf_target_core 00:10:40.751 ************************************ 00:10:40.751 10:59:46 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:40.751 10:59:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:40.751 10:59:46 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.751 10:59:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.751 ************************************ 00:10:40.751 START TEST nvmf_target_extra 00:10:40.751 ************************************ 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:40.751 * Looking for test storage... 
00:10:40.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:40.751 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:41.010 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.011 --rc genhtml_branch_coverage=1 00:10:41.011 --rc genhtml_function_coverage=1 00:10:41.011 --rc genhtml_legend=1 00:10:41.011 --rc geninfo_all_blocks=1 00:10:41.011 --rc geninfo_unexecuted_blocks=1 00:10:41.011 00:10:41.011 ' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.011 --rc genhtml_branch_coverage=1 00:10:41.011 --rc genhtml_function_coverage=1 00:10:41.011 --rc genhtml_legend=1 00:10:41.011 --rc geninfo_all_blocks=1 00:10:41.011 --rc geninfo_unexecuted_blocks=1 00:10:41.011 00:10:41.011 ' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.011 --rc genhtml_branch_coverage=1 00:10:41.011 --rc genhtml_function_coverage=1 00:10:41.011 --rc genhtml_legend=1 00:10:41.011 --rc geninfo_all_blocks=1 00:10:41.011 --rc geninfo_unexecuted_blocks=1 00:10:41.011 00:10:41.011 ' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.011 --rc genhtml_branch_coverage=1 00:10:41.011 --rc genhtml_function_coverage=1 00:10:41.011 --rc genhtml_legend=1 00:10:41.011 --rc geninfo_all_blocks=1 00:10:41.011 --rc geninfo_unexecuted_blocks=1 00:10:41.011 00:10:41.011 ' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.011 10:59:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:41.011 ************************************ 00:10:41.011 START TEST nvmf_auth_target 00:10:41.011 ************************************ 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:41.011 * Looking for test storage... 
00:10:41.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:41.011 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.012 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:41.012 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:41.012 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:41.270 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:41.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.270 --rc genhtml_branch_coverage=1 00:10:41.270 --rc genhtml_function_coverage=1 00:10:41.271 --rc genhtml_legend=1 00:10:41.271 --rc geninfo_all_blocks=1 00:10:41.271 --rc geninfo_unexecuted_blocks=1 00:10:41.271 00:10:41.271 ' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:41.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.271 --rc genhtml_branch_coverage=1 00:10:41.271 --rc genhtml_function_coverage=1 00:10:41.271 --rc genhtml_legend=1 00:10:41.271 --rc geninfo_all_blocks=1 00:10:41.271 --rc geninfo_unexecuted_blocks=1 00:10:41.271 00:10:41.271 ' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:41.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.271 --rc genhtml_branch_coverage=1 00:10:41.271 --rc genhtml_function_coverage=1 00:10:41.271 --rc genhtml_legend=1 00:10:41.271 --rc geninfo_all_blocks=1 00:10:41.271 --rc geninfo_unexecuted_blocks=1 00:10:41.271 00:10:41.271 ' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:41.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:41.271 --rc genhtml_branch_coverage=1 00:10:41.271 --rc genhtml_function_coverage=1 00:10:41.271 --rc genhtml_legend=1 00:10:41.271 --rc geninfo_all_blocks=1 00:10:41.271 --rc geninfo_unexecuted_blocks=1 00:10:41.271 00:10:41.271 ' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:41.271 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:41.271 
10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:41.271 Cannot find device "nvmf_init_br" 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:41.271 Cannot find device "nvmf_init_br2" 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:41.271 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:41.271 Cannot find device "nvmf_tgt_br" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:41.272 Cannot find device "nvmf_tgt_br2" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:41.272 Cannot find device "nvmf_init_br" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:41.272 Cannot find device "nvmf_init_br2" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:41.272 Cannot find device "nvmf_tgt_br" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:41.272 Cannot find device "nvmf_tgt_br2" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:41.272 Cannot find device "nvmf_br" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:41.272 Cannot find device "nvmf_init_if" 00:10:41.272 10:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:41.272 Cannot find device "nvmf_init_if2" 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:41.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:41.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:41.272 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.531 10:59:46 
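The nvmf_veth_init sequence above builds the test topology: a dedicated network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends (nvmf_tgt_if/nvmf_tgt_if2 at 10.0.0.3/10.0.0.4), the initiator-side ends (nvmf_init_if/nvmf_init_if2 at 10.0.0.1/10.0.0.2) stay in the root namespace, and each veth's *_br peer is later enslaved to the nvmf_br bridge. A condensed, standalone sketch of the same layout, with names and addresses taken from the log and the second interface pair plus error handling omitted:

    # Minimal version of the veth/namespace topology set up above (one pair only).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # nvmf_init_br and nvmf_tgt_br are attached to the nvmf_br bridge in the steps that follow.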
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:41.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:10:41.531 00:10:41.531 --- 10.0.0.3 ping statistics --- 00:10:41.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.531 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:41.531 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:41.531 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:41.531 00:10:41.531 --- 10.0.0.4 ping statistics --- 00:10:41.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.531 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:41.531 00:10:41.531 --- 10.0.0.1 ping statistics --- 00:10:41.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.531 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:41.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:41.531 00:10:41.531 --- 10.0.0.2 ping statistics --- 00:10:41.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.531 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=80074 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 80074 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 80074 ']' 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
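Once the ping checks above confirm both directions of the topology, nvmfappstart launches the SPDK target inside the namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD defined earlier) with nvmf_auth debug logging, and the script waits for its RPC socket before continuing. A rough equivalent, with waitforlisten approximated by a simple poll of the default RPC socket:

    # Start nvmf_tgt inside the test namespace with DH-HMAC-CHAP debug logging
    # and wait for /var/tmp/spdk.sock to answer RPCs (simplified waitforlisten).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
        sleep 0.5
    done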
00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:41.531 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=80103 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e3ba9d31049108b54930e4f7fedc826ea16221e8d9c28f8b 00:10:41.790 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VsZ 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e3ba9d31049108b54930e4f7fedc826ea16221e8d9c28f8b 0 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e3ba9d31049108b54930e4f7fedc826ea16221e8d9c28f8b 0 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e3ba9d31049108b54930e4f7fedc826ea16221e8d9c28f8b 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.050 10:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VsZ 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VsZ 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.VsZ 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0fc7dce510c9f35aaca0f84aa29e1912c339a8892d72aa4643819441c33bc51d 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Gst 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0fc7dce510c9f35aaca0f84aa29e1912c339a8892d72aa4643819441c33bc51d 3 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0fc7dce510c9f35aaca0f84aa29e1912c339a8892d72aa4643819441c33bc51d 3 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0fc7dce510c9f35aaca0f84aa29e1912c339a8892d72aa4643819441c33bc51d 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Gst 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Gst 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Gst 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:42.050 10:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=247587994397e5df24ad8cd903d445dd 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rVE 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 247587994397e5df24ad8cd903d445dd 1 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 247587994397e5df24ad8cd903d445dd 1 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=247587994397e5df24ad8cd903d445dd 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rVE 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rVE 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.rVE 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9c9884c831f947a6ef6390717347280b8c57345246c1c0e0 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.g6H 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9c9884c831f947a6ef6390717347280b8c57345246c1c0e0 2 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9c9884c831f947a6ef6390717347280b8c57345246c1c0e0 2 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.050 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9c9884c831f947a6ef6390717347280b8c57345246c1c0e0 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.g6H 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.g6H 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.g6H 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.051 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:42.310 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:42.310 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:42.310 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f53b8a7d4ae2a8485fe0384ead5cc8a386ee53699225831c 00:10:42.310 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8gv 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f53b8a7d4ae2a8485fe0384ead5cc8a386ee53699225831c 2 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f53b8a7d4ae2a8485fe0384ead5cc8a386ee53699225831c 2 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f53b8a7d4ae2a8485fe0384ead5cc8a386ee53699225831c 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8gv 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8gv 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.8gv 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.311 10:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=420e06237d5419c0020a3b3e479843a6 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Jx7 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 420e06237d5419c0020a3b3e479843a6 1 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 420e06237d5419c0020a3b3e479843a6 1 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=420e06237d5419c0020a3b3e479843a6 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Jx7 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Jx7 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Jx7 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b46bd33c4f29d50fa7993b11b6149ad9faa9c76ff73938eb1da84d6779661cc3 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Q2y 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
b46bd33c4f29d50fa7993b11b6149ad9faa9c76ff73938eb1da84d6779661cc3 3 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b46bd33c4f29d50fa7993b11b6149ad9faa9c76ff73938eb1da84d6779661cc3 3 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b46bd33c4f29d50fa7993b11b6149ad9faa9c76ff73938eb1da84d6779661cc3 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Q2y 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Q2y 00:10:42.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Q2y 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 80074 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 80074 ']' 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.311 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 80103 /var/tmp/host.sock 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 80103 ']' 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
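The gen_dhchap_key calls above produce the four key pairs used below: each draws len/2 random bytes with xxd, wraps the resulting hex string in a DHHC-1:<digest-id>: secret (digest ids 0 through 3 for null/sha256/sha384/sha512) via a small Python helper, and stores it in a 0600 temp file whose path lands in keys[] or ckeys[]. A hedged sketch of that flow; the exact checksum layout inside the base64 payload is an assumption here, and the authoritative version is the helper in nvmf/common.sh:

    # Approximate gen_dhchap_key, e.g. "gen_dhchap_key sha256 32" -> /tmp/spdk.key-sha256.XXX
    gen_dhchap_key() {
        local digest=$1 len=$2 hex file
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # "len" hex characters of key material
        file=$(mktemp -t "spdk.key-${digest}.XXX")
        # DHHC-1 wrapping: base64 of the key material plus a trailing CRC-32
        # (checksum layout assumed; the log delegates this to a python snippet).
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$hex" "${ids[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    keys[0]=$(gen_dhchap_key null 48);   ckeys[0]=$(gen_dhchap_key sha512 64)
    keys[1]=$(gen_dhchap_key sha256 32); ckeys[1]=$(gen_dhchap_key sha384 48)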
00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.879 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VsZ 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.VsZ 00:10:43.138 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.VsZ 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Gst ]] 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gst 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gst 00:10:43.398 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gst 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rVE 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rVE 00:10:43.657 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rVE 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.g6H ]] 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.g6H 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.g6H 00:10:43.916 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.g6H 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8gv 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8gv 00:10:44.175 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8gv 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Jx7 ]] 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Jx7 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Jx7 00:10:44.435 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Jx7 00:10:44.709 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:44.709 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Q2y 00:10:44.709 10:59:49 
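Each key file generated above is then registered twice under a stable name (key0..key3, plus ckey0..ckey2 for the bidirectional controller keys): once in the target's keyring through rpc_cmd on the default socket, and once in the host-side spdk_tgt through rpc.py -s /var/tmp/host.sock, so the later auth RPCs can refer to keys purely by name. The registration loop above boils down to:

    # Register every generated key on both sides of the test (names as used later).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in "${!keys[@]}"; do
        $rpc keyring_file_add_key "key$i" "${keys[i]}"                        # target keyring
        $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"  # host-side keyring
        if [[ -n ${ckeys[i]} ]]; then
            $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
            $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done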
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.709 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.709 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.709 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Q2y 00:10:44.709 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Q2y 00:10:44.966 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:44.966 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:44.966 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.966 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.966 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:44.966 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.223 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.481 00:10:45.481 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.481 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.481 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.738 { 00:10:45.738 "cntlid": 1, 00:10:45.738 "qid": 0, 00:10:45.738 "state": "enabled", 00:10:45.738 "thread": "nvmf_tgt_poll_group_000", 00:10:45.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:10:45.738 "listen_address": { 00:10:45.738 "trtype": "TCP", 00:10:45.738 "adrfam": "IPv4", 00:10:45.738 "traddr": "10.0.0.3", 00:10:45.738 "trsvcid": "4420" 00:10:45.738 }, 00:10:45.738 "peer_address": { 00:10:45.738 "trtype": "TCP", 00:10:45.738 "adrfam": "IPv4", 00:10:45.738 "traddr": "10.0.0.1", 00:10:45.738 "trsvcid": "34672" 00:10:45.738 }, 00:10:45.738 "auth": { 00:10:45.738 "state": "completed", 00:10:45.738 "digest": "sha256", 00:10:45.738 "dhgroup": "null" 00:10:45.738 } 00:10:45.738 } 00:10:45.738 ]' 00:10:45.738 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.996 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.310 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:10:46.310 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:10:50.493 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.493 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:50.494 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.494 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.494 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.494 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.494 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.494 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.752 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.752 10:59:56 
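From here the test walks every digest/dhgroup/key combination through the same cycle: restrict the host-side initiator to one digest and one DH group, allow the host NQN on the subsystem with the matching key pair, attach a controller through the host-side bdev layer, check that the resulting qpair reports DH-HMAC-CHAP as completed, detach, then repeat the handshake once more with nvme-cli using the DHHC-1 secrets directly before removing the host again. One iteration, condensed from the commands visible in the log (the literal DHHC-1 secrets are abbreviated):

    # One connect_authenticate iteration (sha256 digest, "null" DH group, key1/ckey1).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6

    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Same credentials handed to nvme-cli as literal DHHC-1 secrets:
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -q "$hostnqn" -l 0 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"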
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:51.011 00:10:51.270 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.270 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.270 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.531 { 00:10:51.531 "cntlid": 3, 00:10:51.531 "qid": 0, 00:10:51.531 "state": "enabled", 00:10:51.531 "thread": "nvmf_tgt_poll_group_000", 00:10:51.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:10:51.531 "listen_address": { 00:10:51.531 "trtype": "TCP", 00:10:51.531 "adrfam": "IPv4", 00:10:51.531 "traddr": "10.0.0.3", 00:10:51.531 "trsvcid": "4420" 00:10:51.531 }, 00:10:51.531 "peer_address": { 00:10:51.531 "trtype": "TCP", 00:10:51.531 "adrfam": "IPv4", 00:10:51.531 "traddr": "10.0.0.1", 00:10:51.531 "trsvcid": "42256" 00:10:51.531 }, 00:10:51.531 "auth": { 00:10:51.531 "state": "completed", 00:10:51.531 "digest": "sha256", 00:10:51.531 "dhgroup": "null" 00:10:51.531 } 00:10:51.531 } 00:10:51.531 ]' 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:51.531 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.531 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.531 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.531 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.790 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret 
DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:10:51.790 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:10:52.726 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.726 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.985 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:53.243 00:10:53.243 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.243 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.243 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.501 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.501 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.501 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.501 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.502 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.502 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.502 { 00:10:53.502 "cntlid": 5, 00:10:53.502 "qid": 0, 00:10:53.502 "state": "enabled", 00:10:53.502 "thread": "nvmf_tgt_poll_group_000", 00:10:53.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:10:53.502 "listen_address": { 00:10:53.502 "trtype": "TCP", 00:10:53.502 "adrfam": "IPv4", 00:10:53.502 "traddr": "10.0.0.3", 00:10:53.502 "trsvcid": "4420" 00:10:53.502 }, 00:10:53.502 "peer_address": { 00:10:53.502 "trtype": "TCP", 00:10:53.502 "adrfam": "IPv4", 00:10:53.502 "traddr": "10.0.0.1", 00:10:53.502 "trsvcid": "42286" 00:10:53.502 }, 00:10:53.502 "auth": { 00:10:53.502 "state": "completed", 00:10:53.502 "digest": "sha256", 00:10:53.502 "dhgroup": "null" 00:10:53.502 } 00:10:53.502 } 00:10:53.502 ]' 00:10:53.502 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.502 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.502 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.760 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:53.760 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.760 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.760 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.760 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.018 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:10:54.018 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:10:54.585 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:54.843 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:55.101 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:55.360 00:10:55.360 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.360 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.360 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.618 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.618 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.618 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.618 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.618 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.618 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.618 { 00:10:55.618 "cntlid": 7, 00:10:55.618 "qid": 0, 00:10:55.618 "state": "enabled", 00:10:55.618 "thread": "nvmf_tgt_poll_group_000", 00:10:55.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:10:55.618 "listen_address": { 00:10:55.618 "trtype": "TCP", 00:10:55.618 "adrfam": "IPv4", 00:10:55.618 "traddr": "10.0.0.3", 00:10:55.618 "trsvcid": "4420" 00:10:55.618 }, 00:10:55.618 "peer_address": { 00:10:55.618 "trtype": "TCP", 00:10:55.618 "adrfam": "IPv4", 00:10:55.618 "traddr": "10.0.0.1", 00:10:55.618 "trsvcid": "42310" 00:10:55.618 }, 00:10:55.618 "auth": { 00:10:55.618 "state": "completed", 00:10:55.618 "digest": "sha256", 00:10:55.618 "dhgroup": "null" 00:10:55.619 } 00:10:55.619 } 00:10:55.619 ]' 00:10:55.619 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.619 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.878 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.878 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:55.878 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.878 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.878 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.878 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.138 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:10:56.138 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.073 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:57.638 00:10:57.638 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.638 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.638 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.894 { 00:10:57.894 "cntlid": 9, 00:10:57.894 "qid": 0, 00:10:57.894 "state": "enabled", 00:10:57.894 "thread": "nvmf_tgt_poll_group_000", 00:10:57.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:10:57.894 "listen_address": { 00:10:57.894 "trtype": "TCP", 00:10:57.894 "adrfam": "IPv4", 00:10:57.894 "traddr": "10.0.0.3", 00:10:57.894 "trsvcid": "4420" 00:10:57.894 }, 00:10:57.894 "peer_address": { 00:10:57.894 "trtype": "TCP", 00:10:57.894 "adrfam": "IPv4", 00:10:57.894 "traddr": "10.0.0.1", 00:10:57.894 "trsvcid": "42334" 00:10:57.894 }, 00:10:57.894 "auth": { 00:10:57.894 "state": "completed", 00:10:57.894 "digest": "sha256", 00:10:57.894 "dhgroup": "ffdhe2048" 00:10:57.894 } 00:10:57.894 } 00:10:57.894 ]' 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.894 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.152 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:58.152 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.152 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.152 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.152 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.411 
11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:10:58.411 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.347 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.914 00:10:59.914 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.914 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.914 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.173 { 00:11:00.173 "cntlid": 11, 00:11:00.173 "qid": 0, 00:11:00.173 "state": "enabled", 00:11:00.173 "thread": "nvmf_tgt_poll_group_000", 00:11:00.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:00.173 "listen_address": { 00:11:00.173 "trtype": "TCP", 00:11:00.173 "adrfam": "IPv4", 00:11:00.173 "traddr": "10.0.0.3", 00:11:00.173 "trsvcid": "4420" 00:11:00.173 }, 00:11:00.173 "peer_address": { 00:11:00.173 "trtype": "TCP", 00:11:00.173 "adrfam": "IPv4", 00:11:00.173 "traddr": "10.0.0.1", 00:11:00.173 "trsvcid": "54564" 00:11:00.173 }, 00:11:00.173 "auth": { 00:11:00.173 "state": "completed", 00:11:00.173 "digest": "sha256", 00:11:00.173 "dhgroup": "ffdhe2048" 00:11:00.173 } 00:11:00.173 } 00:11:00.173 ]' 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.173 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.173 
11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.431 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:00.431 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.365 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.366 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.366 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:01.366 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.366 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.366 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.933 00:11:01.933 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.933 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.933 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.191 { 00:11:02.191 "cntlid": 13, 00:11:02.191 "qid": 0, 00:11:02.191 "state": "enabled", 00:11:02.191 "thread": "nvmf_tgt_poll_group_000", 00:11:02.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:02.191 "listen_address": { 00:11:02.191 "trtype": "TCP", 00:11:02.191 "adrfam": "IPv4", 00:11:02.191 "traddr": "10.0.0.3", 00:11:02.191 "trsvcid": "4420" 00:11:02.191 }, 00:11:02.191 "peer_address": { 00:11:02.191 "trtype": "TCP", 00:11:02.191 "adrfam": "IPv4", 00:11:02.191 "traddr": "10.0.0.1", 00:11:02.191 "trsvcid": "54594" 00:11:02.191 }, 00:11:02.191 "auth": { 00:11:02.191 "state": "completed", 00:11:02.191 "digest": "sha256", 00:11:02.191 "dhgroup": "ffdhe2048" 00:11:02.191 } 00:11:02.191 } 00:11:02.191 ]' 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.191 11:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.191 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.450 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:02.450 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:03.387 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.645 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
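The trace above repeats one fixed sequence for every digest/dhgroup/key combination. The sketch below restates that sequence as plain shell, using only commands already visible in the log (rpc.py, nvme-cli, jq). The NQNs, address and key names are the ones from this run; the DH-HMAC-CHAP secrets are replaced by placeholders, and sending the target-side RPCs to the default SPDK socket is an assumption on my part, since the trace only shows the host socket (/var/tmp/host.sock) explicitly.

    # One iteration of the loop exercised above (digest sha256; the dhgroup
    # and key id vary per pass). Secrets are placeholders, not the test keys.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6

    # 1. Limit the host-side NVMe driver to one digest/dhgroup pair.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # 2. Register the host on the target with the key under test (plus the
    #    controller key when the iteration uses bidirectional auth).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller from the host app and confirm the qpair finished
    #    authentication with the expected digest and dhgroup.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with nvme-cli using the raw DHHC-1 secrets, then
    #    tear down so the next digest/dhgroup/key combination starts clean.
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 \
        --dhchap-secret 'DHHC-1:01:<key>:' --dhchap-ctrl-secret 'DHHC-1:02:<ckey>:'
    nvme disconnect -n "$SUBNQN"
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
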
00:11:03.646 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.646 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:03.646 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.646 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.904 00:11:03.904 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.904 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.904 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.162 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.163 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.163 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.163 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.163 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.163 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.163 { 00:11:04.163 "cntlid": 15, 00:11:04.163 "qid": 0, 00:11:04.163 "state": "enabled", 00:11:04.163 "thread": "nvmf_tgt_poll_group_000", 00:11:04.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:04.163 "listen_address": { 00:11:04.163 "trtype": "TCP", 00:11:04.163 "adrfam": "IPv4", 00:11:04.163 "traddr": "10.0.0.3", 00:11:04.163 "trsvcid": "4420" 00:11:04.163 }, 00:11:04.163 "peer_address": { 00:11:04.163 "trtype": "TCP", 00:11:04.163 "adrfam": "IPv4", 00:11:04.163 "traddr": "10.0.0.1", 00:11:04.163 "trsvcid": "54626" 00:11:04.163 }, 00:11:04.163 "auth": { 00:11:04.163 "state": "completed", 00:11:04.163 "digest": "sha256", 00:11:04.163 "dhgroup": "ffdhe2048" 00:11:04.163 } 00:11:04.163 } 00:11:04.163 ]' 00:11:04.163 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.420 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.420 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.420 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:04.420 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.420 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.420 
11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.420 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.678 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:04.678 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.244 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.503 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.070 00:11:06.070 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.070 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.070 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.329 { 00:11:06.329 "cntlid": 17, 00:11:06.329 "qid": 0, 00:11:06.329 "state": "enabled", 00:11:06.329 "thread": "nvmf_tgt_poll_group_000", 00:11:06.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:06.329 "listen_address": { 00:11:06.329 "trtype": "TCP", 00:11:06.329 "adrfam": "IPv4", 00:11:06.329 "traddr": "10.0.0.3", 00:11:06.329 "trsvcid": "4420" 00:11:06.329 }, 00:11:06.329 "peer_address": { 00:11:06.329 "trtype": "TCP", 00:11:06.329 "adrfam": "IPv4", 00:11:06.329 "traddr": "10.0.0.1", 00:11:06.329 "trsvcid": "54650" 00:11:06.329 }, 00:11:06.329 "auth": { 00:11:06.329 "state": "completed", 00:11:06.329 "digest": "sha256", 00:11:06.329 "dhgroup": "ffdhe3072" 00:11:06.329 } 00:11:06.329 } 00:11:06.329 ]' 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.329 11:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.329 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.588 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:06.588 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.525 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.783 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.041 00:11:08.041 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.041 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.041 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.300 { 00:11:08.300 "cntlid": 19, 00:11:08.300 "qid": 0, 00:11:08.300 "state": "enabled", 00:11:08.300 "thread": "nvmf_tgt_poll_group_000", 00:11:08.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:08.300 "listen_address": { 00:11:08.300 "trtype": "TCP", 00:11:08.300 "adrfam": "IPv4", 00:11:08.300 "traddr": "10.0.0.3", 00:11:08.300 "trsvcid": "4420" 00:11:08.300 }, 00:11:08.300 "peer_address": { 00:11:08.300 "trtype": "TCP", 00:11:08.300 "adrfam": "IPv4", 00:11:08.300 "traddr": "10.0.0.1", 00:11:08.300 "trsvcid": "54682" 00:11:08.300 }, 00:11:08.300 "auth": { 00:11:08.300 "state": "completed", 00:11:08.300 "digest": "sha256", 00:11:08.300 "dhgroup": "ffdhe3072" 00:11:08.300 } 00:11:08.300 } 00:11:08.300 ]' 00:11:08.300 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.559 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.819 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:08.819 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:09.799 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.799 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.057 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.057 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.057 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.057 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.316 00:11:10.316 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.316 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.316 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.574 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.574 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.574 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.574 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.574 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.574 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.574 { 00:11:10.574 "cntlid": 21, 00:11:10.574 "qid": 0, 00:11:10.574 "state": "enabled", 00:11:10.574 "thread": "nvmf_tgt_poll_group_000", 00:11:10.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:10.574 "listen_address": { 00:11:10.574 "trtype": "TCP", 00:11:10.574 "adrfam": "IPv4", 00:11:10.574 "traddr": "10.0.0.3", 00:11:10.574 "trsvcid": "4420" 00:11:10.574 }, 00:11:10.574 "peer_address": { 00:11:10.574 "trtype": "TCP", 00:11:10.574 "adrfam": "IPv4", 00:11:10.574 "traddr": "10.0.0.1", 00:11:10.574 "trsvcid": "37528" 00:11:10.574 }, 00:11:10.574 "auth": { 00:11:10.574 "state": "completed", 00:11:10.574 "digest": "sha256", 00:11:10.574 "dhgroup": "ffdhe3072" 00:11:10.574 } 00:11:10.574 } 00:11:10.574 ]' 00:11:10.574 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.574 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.574 11:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.833 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:10.833 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.833 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.833 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.833 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.092 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:11.092 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:11.659 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.226 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.227 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.227 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:12.227 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.227 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.485 00:11:12.485 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.485 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.485 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.743 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.743 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.743 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.743 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.743 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.743 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.743 { 00:11:12.744 "cntlid": 23, 00:11:12.744 "qid": 0, 00:11:12.744 "state": "enabled", 00:11:12.744 "thread": "nvmf_tgt_poll_group_000", 00:11:12.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:12.744 "listen_address": { 00:11:12.744 "trtype": "TCP", 00:11:12.744 "adrfam": "IPv4", 00:11:12.744 "traddr": "10.0.0.3", 00:11:12.744 "trsvcid": "4420" 00:11:12.744 }, 00:11:12.744 "peer_address": { 00:11:12.744 "trtype": "TCP", 00:11:12.744 "adrfam": "IPv4", 00:11:12.744 "traddr": "10.0.0.1", 00:11:12.744 "trsvcid": "37540" 00:11:12.744 }, 00:11:12.744 "auth": { 00:11:12.744 "state": "completed", 00:11:12.744 "digest": "sha256", 00:11:12.744 "dhgroup": "ffdhe3072" 00:11:12.744 } 00:11:12.744 } 00:11:12.744 ]' 00:11:12.744 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.744 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:12.744 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.744 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:12.744 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.002 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.002 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.002 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.261 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:13.262 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:13.828 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:13.829 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.086 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.651 00:11:14.651 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.651 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.651 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.910 { 00:11:14.910 "cntlid": 25, 00:11:14.910 "qid": 0, 00:11:14.910 "state": "enabled", 00:11:14.910 "thread": "nvmf_tgt_poll_group_000", 00:11:14.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:14.910 "listen_address": { 00:11:14.910 "trtype": "TCP", 00:11:14.910 "adrfam": "IPv4", 00:11:14.910 "traddr": "10.0.0.3", 00:11:14.910 "trsvcid": "4420" 00:11:14.910 }, 00:11:14.910 "peer_address": { 00:11:14.910 "trtype": "TCP", 00:11:14.910 "adrfam": "IPv4", 00:11:14.910 "traddr": "10.0.0.1", 00:11:14.910 "trsvcid": "37566" 00:11:14.910 }, 00:11:14.910 "auth": { 00:11:14.910 "state": "completed", 00:11:14.910 "digest": "sha256", 00:11:14.910 "dhgroup": "ffdhe4096" 00:11:14.910 } 00:11:14.910 } 00:11:14.910 ]' 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:14.910 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.168 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.168 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.168 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.427 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:15.427 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:15.995 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.253 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.820 00:11:16.820 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.820 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.820 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.078 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.078 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.078 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.078 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.078 { 00:11:17.079 "cntlid": 27, 00:11:17.079 "qid": 0, 00:11:17.079 "state": "enabled", 00:11:17.079 "thread": "nvmf_tgt_poll_group_000", 00:11:17.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:17.079 "listen_address": { 00:11:17.079 "trtype": "TCP", 00:11:17.079 "adrfam": "IPv4", 00:11:17.079 "traddr": "10.0.0.3", 00:11:17.079 "trsvcid": "4420" 00:11:17.079 }, 00:11:17.079 "peer_address": { 00:11:17.079 "trtype": "TCP", 00:11:17.079 "adrfam": "IPv4", 00:11:17.079 "traddr": "10.0.0.1", 00:11:17.079 "trsvcid": "37590" 00:11:17.079 }, 00:11:17.079 "auth": { 00:11:17.079 "state": "completed", 
00:11:17.079 "digest": "sha256", 00:11:17.079 "dhgroup": "ffdhe4096" 00:11:17.079 } 00:11:17.079 } 00:11:17.079 ]' 00:11:17.079 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.079 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.079 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.079 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:17.079 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.336 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.336 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.336 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.594 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:17.594 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:18.160 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.160 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:18.160 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.160 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.160 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.161 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.161 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.161 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.728 11:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.728 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.986 00:11:18.986 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.986 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.986 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.245 { 00:11:19.245 "cntlid": 29, 00:11:19.245 "qid": 0, 00:11:19.245 "state": "enabled", 00:11:19.245 "thread": "nvmf_tgt_poll_group_000", 00:11:19.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:19.245 "listen_address": { 00:11:19.245 "trtype": "TCP", 00:11:19.245 "adrfam": "IPv4", 00:11:19.245 "traddr": "10.0.0.3", 00:11:19.245 "trsvcid": "4420" 00:11:19.245 }, 00:11:19.245 "peer_address": { 00:11:19.245 "trtype": "TCP", 00:11:19.245 "adrfam": 
"IPv4", 00:11:19.245 "traddr": "10.0.0.1", 00:11:19.245 "trsvcid": "37606" 00:11:19.245 }, 00:11:19.245 "auth": { 00:11:19.245 "state": "completed", 00:11:19.245 "digest": "sha256", 00:11:19.245 "dhgroup": "ffdhe4096" 00:11:19.245 } 00:11:19.245 } 00:11:19.245 ]' 00:11:19.245 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.503 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.762 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:19.762 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:20.697 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:20.956 11:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.956 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.215 00:11:21.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.215 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.473 { 00:11:21.473 "cntlid": 31, 00:11:21.473 "qid": 0, 00:11:21.473 "state": "enabled", 00:11:21.473 "thread": "nvmf_tgt_poll_group_000", 00:11:21.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:21.473 "listen_address": { 00:11:21.473 "trtype": "TCP", 00:11:21.473 "adrfam": "IPv4", 00:11:21.473 "traddr": "10.0.0.3", 00:11:21.473 "trsvcid": "4420" 00:11:21.473 }, 00:11:21.473 "peer_address": { 00:11:21.473 "trtype": "TCP", 
00:11:21.473 "adrfam": "IPv4", 00:11:21.473 "traddr": "10.0.0.1", 00:11:21.473 "trsvcid": "49326" 00:11:21.473 }, 00:11:21.473 "auth": { 00:11:21.473 "state": "completed", 00:11:21.473 "digest": "sha256", 00:11:21.473 "dhgroup": "ffdhe4096" 00:11:21.473 } 00:11:21.473 } 00:11:21.473 ]' 00:11:21.473 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.732 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.990 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:21.990 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:22.926 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:23.185 
11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.185 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.752 00:11:23.752 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.752 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.752 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.011 { 00:11:24.011 "cntlid": 33, 00:11:24.011 "qid": 0, 00:11:24.011 "state": "enabled", 00:11:24.011 "thread": "nvmf_tgt_poll_group_000", 00:11:24.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:24.011 "listen_address": { 00:11:24.011 "trtype": "TCP", 00:11:24.011 "adrfam": "IPv4", 00:11:24.011 "traddr": 
"10.0.0.3", 00:11:24.011 "trsvcid": "4420" 00:11:24.011 }, 00:11:24.011 "peer_address": { 00:11:24.011 "trtype": "TCP", 00:11:24.011 "adrfam": "IPv4", 00:11:24.011 "traddr": "10.0.0.1", 00:11:24.011 "trsvcid": "49364" 00:11:24.011 }, 00:11:24.011 "auth": { 00:11:24.011 "state": "completed", 00:11:24.011 "digest": "sha256", 00:11:24.011 "dhgroup": "ffdhe6144" 00:11:24.011 } 00:11:24.011 } 00:11:24.011 ]' 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.011 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.583 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:24.583 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:25.149 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.407 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.974 00:11:25.974 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.974 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.974 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.232 { 00:11:26.232 "cntlid": 35, 00:11:26.232 "qid": 0, 00:11:26.232 "state": "enabled", 00:11:26.232 "thread": "nvmf_tgt_poll_group_000", 
00:11:26.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:26.232 "listen_address": { 00:11:26.232 "trtype": "TCP", 00:11:26.232 "adrfam": "IPv4", 00:11:26.232 "traddr": "10.0.0.3", 00:11:26.232 "trsvcid": "4420" 00:11:26.232 }, 00:11:26.232 "peer_address": { 00:11:26.232 "trtype": "TCP", 00:11:26.232 "adrfam": "IPv4", 00:11:26.232 "traddr": "10.0.0.1", 00:11:26.232 "trsvcid": "49392" 00:11:26.232 }, 00:11:26.232 "auth": { 00:11:26.232 "state": "completed", 00:11:26.232 "digest": "sha256", 00:11:26.232 "dhgroup": "ffdhe6144" 00:11:26.232 } 00:11:26.232 } 00:11:26.232 ]' 00:11:26.232 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.491 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.749 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:26.749 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.684 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.684 11:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.943 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.513 00:11:28.513 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.513 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.513 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.772 { 
00:11:28.772 "cntlid": 37, 00:11:28.772 "qid": 0, 00:11:28.772 "state": "enabled", 00:11:28.772 "thread": "nvmf_tgt_poll_group_000", 00:11:28.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:28.772 "listen_address": { 00:11:28.772 "trtype": "TCP", 00:11:28.772 "adrfam": "IPv4", 00:11:28.772 "traddr": "10.0.0.3", 00:11:28.772 "trsvcid": "4420" 00:11:28.772 }, 00:11:28.772 "peer_address": { 00:11:28.772 "trtype": "TCP", 00:11:28.772 "adrfam": "IPv4", 00:11:28.772 "traddr": "10.0.0.1", 00:11:28.772 "trsvcid": "49424" 00:11:28.772 }, 00:11:28.772 "auth": { 00:11:28.772 "state": "completed", 00:11:28.772 "digest": "sha256", 00:11:28.772 "dhgroup": "ffdhe6144" 00:11:28.772 } 00:11:28.772 } 00:11:28.772 ]' 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.772 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.031 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:29.031 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:29.966 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:30.224 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:30.789 00:11:30.789 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.789 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.789 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:31.046 { 00:11:31.046 "cntlid": 39, 00:11:31.046 "qid": 0, 00:11:31.046 "state": "enabled", 00:11:31.046 "thread": "nvmf_tgt_poll_group_000", 00:11:31.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:31.046 "listen_address": { 00:11:31.046 "trtype": "TCP", 00:11:31.046 "adrfam": "IPv4", 00:11:31.046 "traddr": "10.0.0.3", 00:11:31.046 "trsvcid": "4420" 00:11:31.046 }, 00:11:31.046 "peer_address": { 00:11:31.046 "trtype": "TCP", 00:11:31.046 "adrfam": "IPv4", 00:11:31.046 "traddr": "10.0.0.1", 00:11:31.046 "trsvcid": "41376" 00:11:31.046 }, 00:11:31.046 "auth": { 00:11:31.046 "state": "completed", 00:11:31.046 "digest": "sha256", 00:11:31.046 "dhgroup": "ffdhe6144" 00:11:31.046 } 00:11:31.046 } 00:11:31.046 ]' 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.046 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.047 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.047 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.304 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:31.304 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:32.237 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:32.497 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.498 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.063 00:11:33.063 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.063 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.063 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.320 { 00:11:33.320 "cntlid": 41, 00:11:33.320 "qid": 0, 00:11:33.320 "state": "enabled", 00:11:33.320 "thread": "nvmf_tgt_poll_group_000", 00:11:33.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:33.320 "listen_address": { 00:11:33.320 "trtype": "TCP", 00:11:33.320 "adrfam": "IPv4", 00:11:33.320 "traddr": "10.0.0.3", 00:11:33.320 "trsvcid": "4420" 00:11:33.320 }, 00:11:33.320 "peer_address": { 00:11:33.320 "trtype": "TCP", 00:11:33.320 "adrfam": "IPv4", 00:11:33.320 "traddr": "10.0.0.1", 00:11:33.320 "trsvcid": "41404" 00:11:33.320 }, 00:11:33.320 "auth": { 00:11:33.320 "state": "completed", 00:11:33.320 "digest": "sha256", 00:11:33.320 "dhgroup": "ffdhe8192" 00:11:33.320 } 00:11:33.320 } 00:11:33.320 ]' 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.320 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.321 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.321 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.321 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.578 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.578 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.578 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.836 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:33.836 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:34.401 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.659 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.593 00:11:35.593 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.593 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.593 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.852 11:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.852 { 00:11:35.852 "cntlid": 43, 00:11:35.852 "qid": 0, 00:11:35.852 "state": "enabled", 00:11:35.852 "thread": "nvmf_tgt_poll_group_000", 00:11:35.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:35.852 "listen_address": { 00:11:35.852 "trtype": "TCP", 00:11:35.852 "adrfam": "IPv4", 00:11:35.852 "traddr": "10.0.0.3", 00:11:35.852 "trsvcid": "4420" 00:11:35.852 }, 00:11:35.852 "peer_address": { 00:11:35.852 "trtype": "TCP", 00:11:35.852 "adrfam": "IPv4", 00:11:35.852 "traddr": "10.0.0.1", 00:11:35.852 "trsvcid": "41422" 00:11:35.852 }, 00:11:35.852 "auth": { 00:11:35.852 "state": "completed", 00:11:35.852 "digest": "sha256", 00:11:35.852 "dhgroup": "ffdhe8192" 00:11:35.852 } 00:11:35.852 } 00:11:35.852 ]' 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.852 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.110 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:36.110 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:37.045 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.046 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.980 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.980 11:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.980 { 00:11:37.980 "cntlid": 45, 00:11:37.980 "qid": 0, 00:11:37.980 "state": "enabled", 00:11:37.980 "thread": "nvmf_tgt_poll_group_000", 00:11:37.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:37.980 "listen_address": { 00:11:37.980 "trtype": "TCP", 00:11:37.980 "adrfam": "IPv4", 00:11:37.980 "traddr": "10.0.0.3", 00:11:37.980 "trsvcid": "4420" 00:11:37.980 }, 00:11:37.980 "peer_address": { 00:11:37.980 "trtype": "TCP", 00:11:37.980 "adrfam": "IPv4", 00:11:37.980 "traddr": "10.0.0.1", 00:11:37.980 "trsvcid": "41452" 00:11:37.980 }, 00:11:37.980 "auth": { 00:11:37.980 "state": "completed", 00:11:37.980 "digest": "sha256", 00:11:37.980 "dhgroup": "ffdhe8192" 00:11:37.980 } 00:11:37.980 } 00:11:37.980 ]' 00:11:37.980 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.237 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.495 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:38.495 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.429 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:39.430 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.430 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.997 00:11:39.997 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.997 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.997 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.564 
11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.564 { 00:11:40.564 "cntlid": 47, 00:11:40.564 "qid": 0, 00:11:40.564 "state": "enabled", 00:11:40.564 "thread": "nvmf_tgt_poll_group_000", 00:11:40.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:40.564 "listen_address": { 00:11:40.564 "trtype": "TCP", 00:11:40.564 "adrfam": "IPv4", 00:11:40.564 "traddr": "10.0.0.3", 00:11:40.564 "trsvcid": "4420" 00:11:40.564 }, 00:11:40.564 "peer_address": { 00:11:40.564 "trtype": "TCP", 00:11:40.564 "adrfam": "IPv4", 00:11:40.564 "traddr": "10.0.0.1", 00:11:40.564 "trsvcid": "55294" 00:11:40.564 }, 00:11:40.564 "auth": { 00:11:40.564 "state": "completed", 00:11:40.564 "digest": "sha256", 00:11:40.564 "dhgroup": "ffdhe8192" 00:11:40.564 } 00:11:40.564 } 00:11:40.564 ]' 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.564 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.565 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.565 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.823 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:40.823 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.761 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.761 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.332 00:11:42.332 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.332 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.332 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.599 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.599 { 00:11:42.599 "cntlid": 49, 00:11:42.599 "qid": 0, 00:11:42.599 "state": "enabled", 00:11:42.599 "thread": "nvmf_tgt_poll_group_000", 00:11:42.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:42.600 "listen_address": { 00:11:42.600 "trtype": "TCP", 00:11:42.600 "adrfam": "IPv4", 00:11:42.600 "traddr": "10.0.0.3", 00:11:42.600 "trsvcid": "4420" 00:11:42.600 }, 00:11:42.600 "peer_address": { 00:11:42.600 "trtype": "TCP", 00:11:42.600 "adrfam": "IPv4", 00:11:42.600 "traddr": "10.0.0.1", 00:11:42.600 "trsvcid": "55326" 00:11:42.600 }, 00:11:42.600 "auth": { 00:11:42.600 "state": "completed", 00:11:42.600 "digest": "sha384", 00:11:42.600 "dhgroup": "null" 00:11:42.600 } 00:11:42.600 } 00:11:42.600 ]' 00:11:42.600 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.600 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.600 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.600 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:42.600 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.600 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.600 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.600 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.859 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:42.859 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:43.794 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.794 11:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:43.794 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.794 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.794 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.794 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.794 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:43.794 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.053 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.318 00:11:44.318 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.318 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.318 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.582 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.582 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.582 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.582 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.582 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.582 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.582 { 00:11:44.582 "cntlid": 51, 00:11:44.582 "qid": 0, 00:11:44.582 "state": "enabled", 00:11:44.582 "thread": "nvmf_tgt_poll_group_000", 00:11:44.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:44.582 "listen_address": { 00:11:44.582 "trtype": "TCP", 00:11:44.582 "adrfam": "IPv4", 00:11:44.582 "traddr": "10.0.0.3", 00:11:44.582 "trsvcid": "4420" 00:11:44.582 }, 00:11:44.582 "peer_address": { 00:11:44.582 "trtype": "TCP", 00:11:44.582 "adrfam": "IPv4", 00:11:44.582 "traddr": "10.0.0.1", 00:11:44.582 "trsvcid": "55354" 00:11:44.582 }, 00:11:44.582 "auth": { 00:11:44.582 "state": "completed", 00:11:44.582 "digest": "sha384", 00:11:44.582 "dhgroup": "null" 00:11:44.582 } 00:11:44.582 } 00:11:44.582 ]' 00:11:44.582 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.582 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.582 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.840 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:44.840 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.840 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.840 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.840 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.098 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:45.098 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.666 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:45.666 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:46.234 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:46.234 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.234 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.234 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:46.234 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:46.234 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.235 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.494 00:11:46.494 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.494 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:46.494 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.753 { 00:11:46.753 "cntlid": 53, 00:11:46.753 "qid": 0, 00:11:46.753 "state": "enabled", 00:11:46.753 "thread": "nvmf_tgt_poll_group_000", 00:11:46.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:46.753 "listen_address": { 00:11:46.753 "trtype": "TCP", 00:11:46.753 "adrfam": "IPv4", 00:11:46.753 "traddr": "10.0.0.3", 00:11:46.753 "trsvcid": "4420" 00:11:46.753 }, 00:11:46.753 "peer_address": { 00:11:46.753 "trtype": "TCP", 00:11:46.753 "adrfam": "IPv4", 00:11:46.753 "traddr": "10.0.0.1", 00:11:46.753 "trsvcid": "55370" 00:11:46.753 }, 00:11:46.753 "auth": { 00:11:46.753 "state": "completed", 00:11:46.753 "digest": "sha384", 00:11:46.753 "dhgroup": "null" 00:11:46.753 } 00:11:46.753 } 00:11:46.753 ]' 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.753 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.322 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:47.322 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:47.889 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.148 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.428 00:11:48.428 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.428 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:11:48.428 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.686 { 00:11:48.686 "cntlid": 55, 00:11:48.686 "qid": 0, 00:11:48.686 "state": "enabled", 00:11:48.686 "thread": "nvmf_tgt_poll_group_000", 00:11:48.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:48.686 "listen_address": { 00:11:48.686 "trtype": "TCP", 00:11:48.686 "adrfam": "IPv4", 00:11:48.686 "traddr": "10.0.0.3", 00:11:48.686 "trsvcid": "4420" 00:11:48.686 }, 00:11:48.686 "peer_address": { 00:11:48.686 "trtype": "TCP", 00:11:48.686 "adrfam": "IPv4", 00:11:48.686 "traddr": "10.0.0.1", 00:11:48.686 "trsvcid": "55400" 00:11:48.686 }, 00:11:48.686 "auth": { 00:11:48.686 "state": "completed", 00:11:48.686 "digest": "sha384", 00:11:48.686 "dhgroup": "null" 00:11:48.686 } 00:11:48.686 } 00:11:48.686 ]' 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.686 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.944 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:48.944 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.944 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.944 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.944 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.202 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:49.202 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:49.768 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.030 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.287 00:11:50.287 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.287 
11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.287 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.544 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.544 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.544 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.544 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.801 { 00:11:50.801 "cntlid": 57, 00:11:50.801 "qid": 0, 00:11:50.801 "state": "enabled", 00:11:50.801 "thread": "nvmf_tgt_poll_group_000", 00:11:50.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:50.801 "listen_address": { 00:11:50.801 "trtype": "TCP", 00:11:50.801 "adrfam": "IPv4", 00:11:50.801 "traddr": "10.0.0.3", 00:11:50.801 "trsvcid": "4420" 00:11:50.801 }, 00:11:50.801 "peer_address": { 00:11:50.801 "trtype": "TCP", 00:11:50.801 "adrfam": "IPv4", 00:11:50.801 "traddr": "10.0.0.1", 00:11:50.801 "trsvcid": "45858" 00:11:50.801 }, 00:11:50.801 "auth": { 00:11:50.801 "state": "completed", 00:11:50.801 "digest": "sha384", 00:11:50.801 "dhgroup": "ffdhe2048" 00:11:50.801 } 00:11:50.801 } 00:11:50.801 ]' 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.801 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.059 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:51.059 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: 
--dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:51.625 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.625 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:51.625 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.626 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.626 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.626 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.626 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:51.626 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.885 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.452 00:11:52.452 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.452 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.452 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.711 { 00:11:52.711 "cntlid": 59, 00:11:52.711 "qid": 0, 00:11:52.711 "state": "enabled", 00:11:52.711 "thread": "nvmf_tgt_poll_group_000", 00:11:52.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:52.711 "listen_address": { 00:11:52.711 "trtype": "TCP", 00:11:52.711 "adrfam": "IPv4", 00:11:52.711 "traddr": "10.0.0.3", 00:11:52.711 "trsvcid": "4420" 00:11:52.711 }, 00:11:52.711 "peer_address": { 00:11:52.711 "trtype": "TCP", 00:11:52.711 "adrfam": "IPv4", 00:11:52.711 "traddr": "10.0.0.1", 00:11:52.711 "trsvcid": "45874" 00:11:52.711 }, 00:11:52.711 "auth": { 00:11:52.711 "state": "completed", 00:11:52.711 "digest": "sha384", 00:11:52.711 "dhgroup": "ffdhe2048" 00:11:52.711 } 00:11:52.711 } 00:11:52.711 ]' 00:11:52.711 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.711 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.712 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.712 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.712 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.712 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.712 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.712 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.971 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:52.971 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.909 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.910 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.910 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.910 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.168 00:11:54.427 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.427 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.427 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.686 { 00:11:54.686 "cntlid": 61, 00:11:54.686 "qid": 0, 00:11:54.686 "state": "enabled", 00:11:54.686 "thread": "nvmf_tgt_poll_group_000", 00:11:54.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:54.686 "listen_address": { 00:11:54.686 "trtype": "TCP", 00:11:54.686 "adrfam": "IPv4", 00:11:54.686 "traddr": "10.0.0.3", 00:11:54.686 "trsvcid": "4420" 00:11:54.686 }, 00:11:54.686 "peer_address": { 00:11:54.686 "trtype": "TCP", 00:11:54.686 "adrfam": "IPv4", 00:11:54.686 "traddr": "10.0.0.1", 00:11:54.686 "trsvcid": "45920" 00:11:54.686 }, 00:11:54.686 "auth": { 00:11:54.686 "state": "completed", 00:11:54.686 "digest": "sha384", 00:11:54.686 "dhgroup": "ffdhe2048" 00:11:54.686 } 00:11:54.686 } 00:11:54.686 ]' 00:11:54.686 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.686 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.946 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:54.946 11:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:55.884 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.144 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.403 00:11:56.403 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.403 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.403 11:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.662 { 00:11:56.662 "cntlid": 63, 00:11:56.662 "qid": 0, 00:11:56.662 "state": "enabled", 00:11:56.662 "thread": "nvmf_tgt_poll_group_000", 00:11:56.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:56.662 "listen_address": { 00:11:56.662 "trtype": "TCP", 00:11:56.662 "adrfam": "IPv4", 00:11:56.662 "traddr": "10.0.0.3", 00:11:56.662 "trsvcid": "4420" 00:11:56.662 }, 00:11:56.662 "peer_address": { 00:11:56.662 "trtype": "TCP", 00:11:56.662 "adrfam": "IPv4", 00:11:56.662 "traddr": "10.0.0.1", 00:11:56.662 "trsvcid": "45950" 00:11:56.662 }, 00:11:56.662 "auth": { 00:11:56.662 "state": "completed", 00:11:56.662 "digest": "sha384", 00:11:56.662 "dhgroup": "ffdhe2048" 00:11:56.662 } 00:11:56.662 } 00:11:56.662 ]' 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.662 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.922 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:56.922 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.922 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.922 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.922 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.182 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:57.182 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.750 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.751 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:57.751 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:58.010 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.577 00:11:58.577 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.577 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.577 11:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.837 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.837 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.837 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.837 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.837 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.837 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.837 { 00:11:58.837 "cntlid": 65, 00:11:58.837 "qid": 0, 00:11:58.837 "state": "enabled", 00:11:58.837 "thread": "nvmf_tgt_poll_group_000", 00:11:58.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:11:58.837 "listen_address": { 00:11:58.837 "trtype": "TCP", 00:11:58.837 "adrfam": "IPv4", 00:11:58.837 "traddr": "10.0.0.3", 00:11:58.837 "trsvcid": "4420" 00:11:58.837 }, 00:11:58.837 "peer_address": { 00:11:58.837 "trtype": "TCP", 00:11:58.837 "adrfam": "IPv4", 00:11:58.837 "traddr": "10.0.0.1", 00:11:58.837 "trsvcid": "45976" 00:11:58.837 }, 00:11:58.837 "auth": { 00:11:58.838 "state": "completed", 00:11:58.838 "digest": "sha384", 00:11:58.838 "dhgroup": "ffdhe3072" 00:11:58.838 } 00:11:58.838 } 00:11:58.838 ]' 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.838 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.097 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:59.097 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:11:59.664 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:59.924 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.183 11:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.183 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.443 00:12:00.443 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.443 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.443 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.702 { 00:12:00.702 "cntlid": 67, 00:12:00.702 "qid": 0, 00:12:00.702 "state": "enabled", 00:12:00.702 "thread": "nvmf_tgt_poll_group_000", 00:12:00.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:00.702 "listen_address": { 00:12:00.702 "trtype": "TCP", 00:12:00.702 "adrfam": "IPv4", 00:12:00.702 "traddr": "10.0.0.3", 00:12:00.702 "trsvcid": "4420" 00:12:00.702 }, 00:12:00.702 "peer_address": { 00:12:00.702 "trtype": "TCP", 00:12:00.702 "adrfam": "IPv4", 00:12:00.702 "traddr": "10.0.0.1", 00:12:00.702 "trsvcid": "38406" 00:12:00.702 }, 00:12:00.702 "auth": { 00:12:00.702 "state": "completed", 00:12:00.702 "digest": "sha384", 00:12:00.702 "dhgroup": "ffdhe3072" 00:12:00.702 } 00:12:00.702 } 00:12:00.702 ]' 00:12:00.702 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.962 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.222 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:01.222 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:01.789 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.789 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:01.789 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.789 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.048 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.048 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.048 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:02.048 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.308 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.567 00:12:02.567 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.567 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.567 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.827 { 00:12:02.827 "cntlid": 69, 00:12:02.827 "qid": 0, 00:12:02.827 "state": "enabled", 00:12:02.827 "thread": "nvmf_tgt_poll_group_000", 00:12:02.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:02.827 "listen_address": { 00:12:02.827 "trtype": "TCP", 00:12:02.827 "adrfam": "IPv4", 00:12:02.827 "traddr": "10.0.0.3", 00:12:02.827 "trsvcid": "4420" 00:12:02.827 }, 00:12:02.827 "peer_address": { 00:12:02.827 "trtype": "TCP", 00:12:02.827 "adrfam": "IPv4", 00:12:02.827 "traddr": "10.0.0.1", 00:12:02.827 "trsvcid": "38446" 00:12:02.827 }, 00:12:02.827 "auth": { 00:12:02.827 "state": "completed", 00:12:02.827 "digest": "sha384", 00:12:02.827 "dhgroup": "ffdhe3072" 00:12:02.827 } 00:12:02.827 } 00:12:02.827 ]' 00:12:02.827 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:03.087 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.350 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:03.350 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:03.930 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.219 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.787 00:12:04.787 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.787 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.787 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.046 { 00:12:05.046 "cntlid": 71, 00:12:05.046 "qid": 0, 00:12:05.046 "state": "enabled", 00:12:05.046 "thread": "nvmf_tgt_poll_group_000", 00:12:05.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:05.046 "listen_address": { 00:12:05.046 "trtype": "TCP", 00:12:05.046 "adrfam": "IPv4", 00:12:05.046 "traddr": "10.0.0.3", 00:12:05.046 "trsvcid": "4420" 00:12:05.046 }, 00:12:05.046 "peer_address": { 00:12:05.046 "trtype": "TCP", 00:12:05.046 "adrfam": "IPv4", 00:12:05.046 "traddr": "10.0.0.1", 00:12:05.046 "trsvcid": "38478" 00:12:05.046 }, 00:12:05.046 "auth": { 00:12:05.046 "state": "completed", 00:12:05.046 "digest": "sha384", 00:12:05.046 "dhgroup": "ffdhe3072" 00:12:05.046 } 00:12:05.046 } 00:12:05.046 ]' 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.046 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.305 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:05.305 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:05.872 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:06.131 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.390 11:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.390 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.649 00:12:06.649 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.649 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.649 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.908 { 00:12:06.908 "cntlid": 73, 00:12:06.908 "qid": 0, 00:12:06.908 "state": "enabled", 00:12:06.908 "thread": "nvmf_tgt_poll_group_000", 00:12:06.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:06.908 "listen_address": { 00:12:06.908 "trtype": "TCP", 00:12:06.908 "adrfam": "IPv4", 00:12:06.908 "traddr": "10.0.0.3", 00:12:06.908 "trsvcid": "4420" 00:12:06.908 }, 00:12:06.908 "peer_address": { 00:12:06.908 "trtype": "TCP", 00:12:06.908 "adrfam": "IPv4", 00:12:06.908 "traddr": "10.0.0.1", 00:12:06.908 "trsvcid": "38488" 00:12:06.908 }, 00:12:06.908 "auth": { 00:12:06.908 "state": "completed", 00:12:06.908 "digest": "sha384", 00:12:06.908 "dhgroup": "ffdhe4096" 00:12:06.908 } 00:12:06.908 } 00:12:06.908 ]' 00:12:06.908 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.167 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.426 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:07.426 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:07.994 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.253 11:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.253 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.512 00:12:08.770 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.770 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.770 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.030 { 00:12:09.030 "cntlid": 75, 00:12:09.030 "qid": 0, 00:12:09.030 "state": "enabled", 00:12:09.030 "thread": "nvmf_tgt_poll_group_000", 00:12:09.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:09.030 "listen_address": { 00:12:09.030 "trtype": "TCP", 00:12:09.030 "adrfam": "IPv4", 00:12:09.030 "traddr": "10.0.0.3", 00:12:09.030 "trsvcid": "4420" 00:12:09.030 }, 00:12:09.030 "peer_address": { 00:12:09.030 "trtype": "TCP", 00:12:09.030 "adrfam": "IPv4", 00:12:09.030 "traddr": "10.0.0.1", 00:12:09.030 "trsvcid": "38520" 00:12:09.030 }, 00:12:09.030 "auth": { 00:12:09.030 "state": "completed", 00:12:09.030 "digest": "sha384", 00:12:09.030 "dhgroup": "ffdhe4096" 00:12:09.030 } 00:12:09.030 } 00:12:09.030 ]' 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.030 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.289 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:09.289 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.234 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.802 00:12:10.802 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.802 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.802 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.063 { 00:12:11.063 "cntlid": 77, 00:12:11.063 "qid": 0, 00:12:11.063 "state": "enabled", 00:12:11.063 "thread": "nvmf_tgt_poll_group_000", 00:12:11.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:11.063 "listen_address": { 00:12:11.063 "trtype": "TCP", 00:12:11.063 "adrfam": "IPv4", 00:12:11.063 "traddr": "10.0.0.3", 00:12:11.063 "trsvcid": "4420" 00:12:11.063 }, 00:12:11.063 "peer_address": { 00:12:11.063 "trtype": "TCP", 00:12:11.063 "adrfam": "IPv4", 00:12:11.063 "traddr": "10.0.0.1", 00:12:11.063 "trsvcid": "54744" 00:12:11.063 }, 00:12:11.063 "auth": { 00:12:11.063 "state": "completed", 00:12:11.063 "digest": "sha384", 00:12:11.063 "dhgroup": "ffdhe4096" 00:12:11.063 } 00:12:11.063 } 00:12:11.063 ]' 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.063 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.322 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:11.322 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.258 11:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.258 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.259 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:12.259 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.259 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.827 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.827 { 00:12:12.827 "cntlid": 79, 00:12:12.827 "qid": 0, 00:12:12.827 "state": "enabled", 00:12:12.827 "thread": "nvmf_tgt_poll_group_000", 00:12:12.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:12.827 "listen_address": { 00:12:12.827 "trtype": "TCP", 00:12:12.827 "adrfam": "IPv4", 00:12:12.827 "traddr": "10.0.0.3", 00:12:12.827 "trsvcid": "4420" 00:12:12.827 }, 00:12:12.827 "peer_address": { 00:12:12.827 "trtype": "TCP", 00:12:12.827 "adrfam": "IPv4", 00:12:12.827 "traddr": "10.0.0.1", 00:12:12.827 "trsvcid": "54786" 00:12:12.827 }, 00:12:12.827 "auth": { 00:12:12.827 "state": "completed", 00:12:12.827 "digest": "sha384", 00:12:12.827 "dhgroup": "ffdhe4096" 00:12:12.827 } 00:12:12.827 } 00:12:12.827 ]' 00:12:12.827 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.086 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.086 11:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.086 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:13.086 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.086 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.086 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.086 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.345 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:13.345 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:13.914 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.482 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.740 00:12:14.740 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.740 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.740 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.999 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.999 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.999 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.999 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.259 { 00:12:15.259 "cntlid": 81, 00:12:15.259 "qid": 0, 00:12:15.259 "state": "enabled", 00:12:15.259 "thread": "nvmf_tgt_poll_group_000", 00:12:15.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:15.259 "listen_address": { 00:12:15.259 "trtype": "TCP", 00:12:15.259 "adrfam": "IPv4", 00:12:15.259 "traddr": "10.0.0.3", 00:12:15.259 "trsvcid": "4420" 00:12:15.259 }, 00:12:15.259 "peer_address": { 00:12:15.259 "trtype": "TCP", 00:12:15.259 "adrfam": "IPv4", 00:12:15.259 "traddr": "10.0.0.1", 00:12:15.259 "trsvcid": "54816" 00:12:15.259 }, 00:12:15.259 "auth": { 00:12:15.259 "state": "completed", 00:12:15.259 "digest": "sha384", 00:12:15.259 "dhgroup": "ffdhe6144" 00:12:15.259 } 00:12:15.259 } 00:12:15.259 ]' 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.259 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.518 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:15.518 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.454 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.712 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.712 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.712 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.712 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.971 00:12:16.971 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.971 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.971 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.539 { 00:12:17.539 "cntlid": 83, 00:12:17.539 "qid": 0, 00:12:17.539 "state": "enabled", 00:12:17.539 "thread": "nvmf_tgt_poll_group_000", 00:12:17.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:17.539 "listen_address": { 00:12:17.539 "trtype": "TCP", 00:12:17.539 "adrfam": "IPv4", 00:12:17.539 "traddr": "10.0.0.3", 00:12:17.539 "trsvcid": "4420" 00:12:17.539 }, 00:12:17.539 "peer_address": { 00:12:17.539 "trtype": "TCP", 00:12:17.539 "adrfam": "IPv4", 00:12:17.539 "traddr": "10.0.0.1", 00:12:17.539 "trsvcid": "54852" 00:12:17.539 }, 00:12:17.539 "auth": { 00:12:17.539 "state": "completed", 00:12:17.539 "digest": "sha384", 
00:12:17.539 "dhgroup": "ffdhe6144" 00:12:17.539 } 00:12:17.539 } 00:12:17.539 ]' 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.539 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.798 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:17.798 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.367 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.626 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.195 00:12:19.195 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.195 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.195 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.453 { 00:12:19.453 "cntlid": 85, 00:12:19.453 "qid": 0, 00:12:19.453 "state": "enabled", 00:12:19.453 "thread": "nvmf_tgt_poll_group_000", 00:12:19.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:19.453 "listen_address": { 00:12:19.453 "trtype": "TCP", 00:12:19.453 "adrfam": "IPv4", 00:12:19.453 "traddr": "10.0.0.3", 00:12:19.453 "trsvcid": "4420" 00:12:19.453 }, 00:12:19.453 "peer_address": { 00:12:19.453 "trtype": "TCP", 00:12:19.453 "adrfam": "IPv4", 00:12:19.453 "traddr": "10.0.0.1", 00:12:19.453 "trsvcid": "54866" 
00:12:19.453 }, 00:12:19.453 "auth": { 00:12:19.453 "state": "completed", 00:12:19.453 "digest": "sha384", 00:12:19.453 "dhgroup": "ffdhe6144" 00:12:19.453 } 00:12:19.453 } 00:12:19.453 ]' 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.453 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.712 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:19.712 11:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.712 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.712 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.712 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.971 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:19.971 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.538 11:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.797 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.365 00:12:21.365 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.365 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.365 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.624 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.624 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.624 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.624 11:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.624 { 00:12:21.624 "cntlid": 87, 00:12:21.624 "qid": 0, 00:12:21.624 "state": "enabled", 00:12:21.624 "thread": "nvmf_tgt_poll_group_000", 00:12:21.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:21.624 "listen_address": { 00:12:21.624 "trtype": "TCP", 00:12:21.624 "adrfam": "IPv4", 00:12:21.624 "traddr": "10.0.0.3", 00:12:21.624 "trsvcid": "4420" 00:12:21.624 }, 00:12:21.624 "peer_address": { 00:12:21.624 "trtype": "TCP", 00:12:21.624 "adrfam": "IPv4", 00:12:21.624 "traddr": "10.0.0.1", 00:12:21.624 "trsvcid": 
"35226" 00:12:21.624 }, 00:12:21.624 "auth": { 00:12:21.624 "state": "completed", 00:12:21.624 "digest": "sha384", 00:12:21.624 "dhgroup": "ffdhe6144" 00:12:21.624 } 00:12:21.624 } 00:12:21.624 ]' 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.624 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.884 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.884 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.884 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.143 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:22.143 11:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:22.711 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.969 11:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.903 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.903 { 00:12:23.903 "cntlid": 89, 00:12:23.903 "qid": 0, 00:12:23.903 "state": "enabled", 00:12:23.903 "thread": "nvmf_tgt_poll_group_000", 00:12:23.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:23.903 "listen_address": { 00:12:23.903 "trtype": "TCP", 00:12:23.903 "adrfam": "IPv4", 00:12:23.903 "traddr": "10.0.0.3", 00:12:23.903 "trsvcid": "4420" 00:12:23.903 }, 00:12:23.903 "peer_address": { 00:12:23.903 
"trtype": "TCP", 00:12:23.903 "adrfam": "IPv4", 00:12:23.903 "traddr": "10.0.0.1", 00:12:23.903 "trsvcid": "35260" 00:12:23.903 }, 00:12:23.903 "auth": { 00:12:23.903 "state": "completed", 00:12:23.903 "digest": "sha384", 00:12:23.903 "dhgroup": "ffdhe8192" 00:12:23.903 } 00:12:23.903 } 00:12:23.903 ]' 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.903 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.162 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.162 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.162 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.162 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.162 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.420 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:24.420 11:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:24.986 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:24.987 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:25.246 11:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.246 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.814 00:12:25.814 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.814 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.814 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.073 { 00:12:26.073 "cntlid": 91, 00:12:26.073 "qid": 0, 00:12:26.073 "state": "enabled", 00:12:26.073 "thread": "nvmf_tgt_poll_group_000", 00:12:26.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 
00:12:26.073 "listen_address": { 00:12:26.073 "trtype": "TCP", 00:12:26.073 "adrfam": "IPv4", 00:12:26.073 "traddr": "10.0.0.3", 00:12:26.073 "trsvcid": "4420" 00:12:26.073 }, 00:12:26.073 "peer_address": { 00:12:26.073 "trtype": "TCP", 00:12:26.073 "adrfam": "IPv4", 00:12:26.073 "traddr": "10.0.0.1", 00:12:26.073 "trsvcid": "35290" 00:12:26.073 }, 00:12:26.073 "auth": { 00:12:26.073 "state": "completed", 00:12:26.073 "digest": "sha384", 00:12:26.073 "dhgroup": "ffdhe8192" 00:12:26.073 } 00:12:26.073 } 00:12:26.073 ]' 00:12:26.073 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.333 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.591 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:26.591 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.158 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.417 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.675 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.675 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.675 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.676 11:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.244 00:12:28.244 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.244 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.244 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.503 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.503 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.503 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.503 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.504 { 00:12:28.504 "cntlid": 93, 00:12:28.504 "qid": 0, 00:12:28.504 "state": "enabled", 00:12:28.504 "thread": 
"nvmf_tgt_poll_group_000", 00:12:28.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:28.504 "listen_address": { 00:12:28.504 "trtype": "TCP", 00:12:28.504 "adrfam": "IPv4", 00:12:28.504 "traddr": "10.0.0.3", 00:12:28.504 "trsvcid": "4420" 00:12:28.504 }, 00:12:28.504 "peer_address": { 00:12:28.504 "trtype": "TCP", 00:12:28.504 "adrfam": "IPv4", 00:12:28.504 "traddr": "10.0.0.1", 00:12:28.504 "trsvcid": "35324" 00:12:28.504 }, 00:12:28.504 "auth": { 00:12:28.504 "state": "completed", 00:12:28.504 "digest": "sha384", 00:12:28.504 "dhgroup": "ffdhe8192" 00:12:28.504 } 00:12:28.504 } 00:12:28.504 ]' 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.504 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.764 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.764 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.764 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.023 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:29.023 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.592 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.592 11:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.851 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.419 00:12:30.419 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.419 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.419 11:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.988 { 00:12:30.988 "cntlid": 95, 00:12:30.988 "qid": 0, 00:12:30.988 "state": "enabled", 00:12:30.988 
"thread": "nvmf_tgt_poll_group_000", 00:12:30.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:30.988 "listen_address": { 00:12:30.988 "trtype": "TCP", 00:12:30.988 "adrfam": "IPv4", 00:12:30.988 "traddr": "10.0.0.3", 00:12:30.988 "trsvcid": "4420" 00:12:30.988 }, 00:12:30.988 "peer_address": { 00:12:30.988 "trtype": "TCP", 00:12:30.988 "adrfam": "IPv4", 00:12:30.988 "traddr": "10.0.0.1", 00:12:30.988 "trsvcid": "41426" 00:12:30.988 }, 00:12:30.988 "auth": { 00:12:30.988 "state": "completed", 00:12:30.988 "digest": "sha384", 00:12:30.988 "dhgroup": "ffdhe8192" 00:12:30.988 } 00:12:30.988 } 00:12:30.988 ]' 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.988 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.247 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:31.248 11:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.185 11:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.185 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.445 11:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.704 00:12:32.704 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.704 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.704 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.964 { 00:12:32.964 "cntlid": 97, 00:12:32.964 "qid": 0, 00:12:32.964 "state": "enabled", 00:12:32.964 "thread": "nvmf_tgt_poll_group_000", 00:12:32.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:32.964 "listen_address": { 00:12:32.964 "trtype": "TCP", 00:12:32.964 "adrfam": "IPv4", 00:12:32.964 "traddr": "10.0.0.3", 00:12:32.964 "trsvcid": "4420" 00:12:32.964 }, 00:12:32.964 "peer_address": { 00:12:32.964 "trtype": "TCP", 00:12:32.964 "adrfam": "IPv4", 00:12:32.964 "traddr": "10.0.0.1", 00:12:32.964 "trsvcid": "41446" 00:12:32.964 }, 00:12:32.964 "auth": { 00:12:32.964 "state": "completed", 00:12:32.964 "digest": "sha512", 00:12:32.964 "dhgroup": "null" 00:12:32.964 } 00:12:32.964 } 00:12:32.964 ]' 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:32.964 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.223 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:33.223 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.223 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.223 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.223 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.482 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:33.482 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:34.050 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.617 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.875 00:12:34.875 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.875 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.875 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.134 11:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.134 { 00:12:35.134 "cntlid": 99, 00:12:35.134 "qid": 0, 00:12:35.134 "state": "enabled", 00:12:35.134 "thread": "nvmf_tgt_poll_group_000", 00:12:35.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:35.134 "listen_address": { 00:12:35.134 "trtype": "TCP", 00:12:35.134 "adrfam": "IPv4", 00:12:35.134 "traddr": "10.0.0.3", 00:12:35.134 "trsvcid": "4420" 00:12:35.134 }, 00:12:35.134 "peer_address": { 00:12:35.134 "trtype": "TCP", 00:12:35.134 "adrfam": "IPv4", 00:12:35.134 "traddr": "10.0.0.1", 00:12:35.134 "trsvcid": "41476" 00:12:35.134 }, 00:12:35.134 "auth": { 00:12:35.134 "state": "completed", 00:12:35.134 "digest": "sha512", 00:12:35.134 "dhgroup": "null" 00:12:35.134 } 00:12:35.134 } 00:12:35.134 ]' 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:35.134 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.393 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.393 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.393 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.653 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:35.653 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:36.225 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.225 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:36.226 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.226 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.226 11:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.226 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.226 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.226 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.485 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.052 00:12:37.052 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.052 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.052 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.310 { 00:12:37.310 "cntlid": 101, 00:12:37.310 "qid": 0, 00:12:37.310 "state": "enabled", 00:12:37.310 "thread": "nvmf_tgt_poll_group_000", 00:12:37.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:37.310 "listen_address": { 00:12:37.310 "trtype": "TCP", 00:12:37.310 "adrfam": "IPv4", 00:12:37.310 "traddr": "10.0.0.3", 00:12:37.310 "trsvcid": "4420" 00:12:37.310 }, 00:12:37.310 "peer_address": { 00:12:37.310 "trtype": "TCP", 00:12:37.310 "adrfam": "IPv4", 00:12:37.310 "traddr": "10.0.0.1", 00:12:37.310 "trsvcid": "41508" 00:12:37.310 }, 00:12:37.310 "auth": { 00:12:37.310 "state": "completed", 00:12:37.310 "digest": "sha512", 00:12:37.310 "dhgroup": "null" 00:12:37.310 } 00:12:37.310 } 00:12:37.310 ]' 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.310 11:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.877 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:37.877 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.445 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.703 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.268 00:12:39.268 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.268 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.268 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.527 { 00:12:39.527 "cntlid": 103, 00:12:39.527 "qid": 0, 00:12:39.527 "state": "enabled", 00:12:39.527 "thread": "nvmf_tgt_poll_group_000", 00:12:39.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:39.527 "listen_address": { 00:12:39.527 "trtype": "TCP", 00:12:39.527 "adrfam": "IPv4", 00:12:39.527 "traddr": "10.0.0.3", 00:12:39.527 "trsvcid": "4420" 00:12:39.527 }, 00:12:39.527 "peer_address": { 00:12:39.527 "trtype": "TCP", 00:12:39.527 "adrfam": "IPv4", 00:12:39.527 "traddr": "10.0.0.1", 00:12:39.527 "trsvcid": "41528" 00:12:39.527 }, 00:12:39.527 "auth": { 00:12:39.527 "state": "completed", 00:12:39.527 "digest": "sha512", 00:12:39.527 "dhgroup": "null" 00:12:39.527 } 00:12:39.527 } 00:12:39.527 ]' 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.527 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.786 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:39.786 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:40.354 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.354 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:40.354 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.613 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.613 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:40.613 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.613 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.613 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:40.613 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.873 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.132 00:12:41.132 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.132 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.132 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.390 
11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.390 { 00:12:41.390 "cntlid": 105, 00:12:41.390 "qid": 0, 00:12:41.390 "state": "enabled", 00:12:41.390 "thread": "nvmf_tgt_poll_group_000", 00:12:41.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:41.390 "listen_address": { 00:12:41.390 "trtype": "TCP", 00:12:41.390 "adrfam": "IPv4", 00:12:41.390 "traddr": "10.0.0.3", 00:12:41.390 "trsvcid": "4420" 00:12:41.390 }, 00:12:41.390 "peer_address": { 00:12:41.390 "trtype": "TCP", 00:12:41.390 "adrfam": "IPv4", 00:12:41.390 "traddr": "10.0.0.1", 00:12:41.390 "trsvcid": "49952" 00:12:41.390 }, 00:12:41.390 "auth": { 00:12:41.390 "state": "completed", 00:12:41.390 "digest": "sha512", 00:12:41.390 "dhgroup": "ffdhe2048" 00:12:41.390 } 00:12:41.390 } 00:12:41.390 ]' 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:41.390 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.649 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.649 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.649 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.907 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:41.907 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:42.476 11:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:42.476 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.735 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.994 00:12:42.995 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.995 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.995 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.562 { 00:12:43.562 "cntlid": 107, 00:12:43.562 "qid": 0, 00:12:43.562 "state": "enabled", 00:12:43.562 "thread": "nvmf_tgt_poll_group_000", 00:12:43.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:43.562 "listen_address": { 00:12:43.562 "trtype": "TCP", 00:12:43.562 "adrfam": "IPv4", 00:12:43.562 "traddr": "10.0.0.3", 00:12:43.562 "trsvcid": "4420" 00:12:43.562 }, 00:12:43.562 "peer_address": { 00:12:43.562 "trtype": "TCP", 00:12:43.562 "adrfam": "IPv4", 00:12:43.562 "traddr": "10.0.0.1", 00:12:43.562 "trsvcid": "49972" 00:12:43.562 }, 00:12:43.562 "auth": { 00:12:43.562 "state": "completed", 00:12:43.562 "digest": "sha512", 00:12:43.562 "dhgroup": "ffdhe2048" 00:12:43.562 } 00:12:43.562 } 00:12:43.562 ]' 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.562 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.563 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.822 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:43.822 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:44.388 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.646 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.905 00:12:44.905 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.905 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.905 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:45.163 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.163 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.164 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.164 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.164 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.164 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.164 { 00:12:45.164 "cntlid": 109, 00:12:45.164 "qid": 0, 00:12:45.164 "state": "enabled", 00:12:45.164 "thread": "nvmf_tgt_poll_group_000", 00:12:45.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:45.164 "listen_address": { 00:12:45.164 "trtype": "TCP", 00:12:45.164 "adrfam": "IPv4", 00:12:45.164 "traddr": "10.0.0.3", 00:12:45.164 "trsvcid": "4420" 00:12:45.164 }, 00:12:45.164 "peer_address": { 00:12:45.164 "trtype": "TCP", 00:12:45.164 "adrfam": "IPv4", 00:12:45.164 "traddr": "10.0.0.1", 00:12:45.164 "trsvcid": "49982" 00:12:45.164 }, 00:12:45.164 "auth": { 00:12:45.164 "state": "completed", 00:12:45.164 "digest": "sha512", 00:12:45.164 "dhgroup": "ffdhe2048" 00:12:45.164 } 00:12:45.164 } 00:12:45.164 ]' 00:12:45.164 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.423 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.682 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:45.682 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.250 11:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:46.250 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.509 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.079 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.079 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.079 { 00:12:47.079 "cntlid": 111, 00:12:47.079 "qid": 0, 00:12:47.079 "state": "enabled", 00:12:47.079 "thread": "nvmf_tgt_poll_group_000", 00:12:47.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:47.080 "listen_address": { 00:12:47.080 "trtype": "TCP", 00:12:47.080 "adrfam": "IPv4", 00:12:47.080 "traddr": "10.0.0.3", 00:12:47.080 "trsvcid": "4420" 00:12:47.080 }, 00:12:47.080 "peer_address": { 00:12:47.080 "trtype": "TCP", 00:12:47.080 "adrfam": "IPv4", 00:12:47.080 "traddr": "10.0.0.1", 00:12:47.080 "trsvcid": "50006" 00:12:47.080 }, 00:12:47.080 "auth": { 00:12:47.080 "state": "completed", 00:12:47.080 "digest": "sha512", 00:12:47.080 "dhgroup": "ffdhe2048" 00:12:47.080 } 00:12:47.080 } 00:12:47.080 ]' 00:12:47.080 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.341 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.600 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:47.600 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:48.168 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.428 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.997 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.997 11:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.997 { 00:12:48.997 "cntlid": 113, 00:12:48.997 "qid": 0, 00:12:48.997 "state": "enabled", 00:12:48.997 "thread": "nvmf_tgt_poll_group_000", 00:12:48.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:48.997 "listen_address": { 00:12:48.997 "trtype": "TCP", 00:12:48.997 "adrfam": "IPv4", 00:12:48.997 "traddr": "10.0.0.3", 00:12:48.997 "trsvcid": "4420" 00:12:48.997 }, 00:12:48.997 "peer_address": { 00:12:48.997 "trtype": "TCP", 00:12:48.997 "adrfam": "IPv4", 00:12:48.997 "traddr": "10.0.0.1", 00:12:48.997 "trsvcid": "50036" 00:12:48.997 }, 00:12:48.997 "auth": { 00:12:48.997 "state": "completed", 00:12:48.997 "digest": "sha512", 00:12:48.997 "dhgroup": "ffdhe3072" 00:12:48.997 } 00:12:48.997 } 00:12:48.997 ]' 00:12:48.997 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.256 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.515 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:49.515 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 
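The trace above repeats one DH-HMAC-CHAP cycle per key: the initiator is pinned to a single digest/dhgroup with bdev_nvme_set_options, the host NQN is registered on the subsystem with its key(s), a controller is attached with the same keys, the negotiated parameters are read back from nvmf_subsystem_get_qpairs, and everything is torn down again, including a plain nvme-cli connect/disconnect against the same secrets. Below is a condensed, hand-written sketch of that cycle. The rpc.py calls, flags, addresses, NQNs and keyring names are copied from the trace; the tgt_rpc/host_rpc helpers, the socket layout and the hostkey/ctrlkey variables are assumptions for illustration, not part of the original script.

  #!/usr/bin/env bash
  # Assumed layout, mirroring the trace: the target answers on the default RPC socket,
  # the initiator's RPC server listens on /var/tmp/host.sock, and keyring entries
  # key1/ckey1 are already loaded.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgt_rpc()  { "$rpc" "$@"; }                        # what the script calls rpc_cmd
  host_rpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # what the script calls hostrpc
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6

  # pin the initiator to one digest/dhgroup so the negotiation result is deterministic
  host_rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # allow the host on the subsystem, then attach a controller with the matching keys
  tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # read back what was actually negotiated on the new qpair
  qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]] || exit 1
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]] || exit 1
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] || exit 1
  host_rpc bdev_nvme_detach_controller nvme0

  # the same secrets exercised through nvme-cli; hostkey/ctrlkey must hold the plaintext
  # DHHC-1:xx:... strings for this key (they appear in full in the trace, not repeated here)
  hostkey=${hostkey:?set to the DHHC-1 host secret from the trace}
  ctrlkey=${ctrlkey:?set to the DHHC-1 controller secret from the trace}
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 \
      --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
  nvme disconnect -n "$subnqn"
  tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
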
00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:50.093 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.352 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.612 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.612 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.612 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.612 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.871 00:12:50.871 11:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.871 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.871 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.131 { 00:12:51.131 "cntlid": 115, 00:12:51.131 "qid": 0, 00:12:51.131 "state": "enabled", 00:12:51.131 "thread": "nvmf_tgt_poll_group_000", 00:12:51.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:51.131 "listen_address": { 00:12:51.131 "trtype": "TCP", 00:12:51.131 "adrfam": "IPv4", 00:12:51.131 "traddr": "10.0.0.3", 00:12:51.131 "trsvcid": "4420" 00:12:51.131 }, 00:12:51.131 "peer_address": { 00:12:51.131 "trtype": "TCP", 00:12:51.131 "adrfam": "IPv4", 00:12:51.131 "traddr": "10.0.0.1", 00:12:51.131 "trsvcid": "34170" 00:12:51.131 }, 00:12:51.131 "auth": { 00:12:51.131 "state": "completed", 00:12:51.131 "digest": "sha512", 00:12:51.131 "dhgroup": "ffdhe3072" 00:12:51.131 } 00:12:51.131 } 00:12:51.131 ]' 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.131 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.390 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.390 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.390 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.650 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:51.650 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret 
DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:52.219 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.787 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.046 00:12:53.046 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.046 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.046 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.306 { 00:12:53.306 "cntlid": 117, 00:12:53.306 "qid": 0, 00:12:53.306 "state": "enabled", 00:12:53.306 "thread": "nvmf_tgt_poll_group_000", 00:12:53.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:53.306 "listen_address": { 00:12:53.306 "trtype": "TCP", 00:12:53.306 "adrfam": "IPv4", 00:12:53.306 "traddr": "10.0.0.3", 00:12:53.306 "trsvcid": "4420" 00:12:53.306 }, 00:12:53.306 "peer_address": { 00:12:53.306 "trtype": "TCP", 00:12:53.306 "adrfam": "IPv4", 00:12:53.306 "traddr": "10.0.0.1", 00:12:53.306 "trsvcid": "34200" 00:12:53.306 }, 00:12:53.306 "auth": { 00:12:53.306 "state": "completed", 00:12:53.306 "digest": "sha512", 00:12:53.306 "dhgroup": "ffdhe3072" 00:12:53.306 } 00:12:53.306 } 00:12:53.306 ]' 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.306 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.564 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.564 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.564 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.822 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:53.822 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.386 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:54.644 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:54.644 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.644 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.644 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:54.644 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.645 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.903 00:12:54.903 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.903 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.903 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.161 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.161 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.161 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.161 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.161 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.161 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.161 { 00:12:55.161 "cntlid": 119, 00:12:55.161 "qid": 0, 00:12:55.161 "state": "enabled", 00:12:55.161 "thread": "nvmf_tgt_poll_group_000", 00:12:55.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:55.161 "listen_address": { 00:12:55.161 "trtype": "TCP", 00:12:55.161 "adrfam": "IPv4", 00:12:55.161 "traddr": "10.0.0.3", 00:12:55.161 "trsvcid": "4420" 00:12:55.161 }, 00:12:55.161 "peer_address": { 00:12:55.161 "trtype": "TCP", 00:12:55.161 "adrfam": "IPv4", 00:12:55.162 "traddr": "10.0.0.1", 00:12:55.162 "trsvcid": "34232" 00:12:55.162 }, 00:12:55.162 "auth": { 00:12:55.162 "state": "completed", 00:12:55.162 "digest": "sha512", 00:12:55.162 "dhgroup": "ffdhe3072" 00:12:55.162 } 00:12:55.162 } 00:12:55.162 ]' 00:12:55.162 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.162 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.420 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.420 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:55.420 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.420 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.420 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.420 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.682 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:55.682 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:56.262 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:56.829 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:56.829 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.829 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.829 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.829 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.829 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.830 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.088 00:12:57.088 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.088 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.088 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.347 { 00:12:57.347 "cntlid": 121, 00:12:57.347 "qid": 0, 00:12:57.347 "state": "enabled", 00:12:57.347 "thread": "nvmf_tgt_poll_group_000", 00:12:57.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:57.347 "listen_address": { 00:12:57.347 "trtype": "TCP", 00:12:57.347 "adrfam": "IPv4", 00:12:57.347 "traddr": "10.0.0.3", 00:12:57.347 "trsvcid": "4420" 00:12:57.347 }, 00:12:57.347 "peer_address": { 00:12:57.347 "trtype": "TCP", 00:12:57.347 "adrfam": "IPv4", 00:12:57.347 "traddr": "10.0.0.1", 00:12:57.347 "trsvcid": "34266" 00:12:57.347 }, 00:12:57.347 "auth": { 00:12:57.347 "state": "completed", 00:12:57.347 "digest": "sha512", 00:12:57.347 "dhgroup": "ffdhe4096" 00:12:57.347 } 00:12:57.347 } 00:12:57.347 ]' 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.347 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.606 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.606 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.606 11:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.864 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret 
DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:57.864 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:58.430 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.690 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.949 00:12:59.208 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.208 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.208 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.467 { 00:12:59.467 "cntlid": 123, 00:12:59.467 "qid": 0, 00:12:59.467 "state": "enabled", 00:12:59.467 "thread": "nvmf_tgt_poll_group_000", 00:12:59.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:12:59.467 "listen_address": { 00:12:59.467 "trtype": "TCP", 00:12:59.467 "adrfam": "IPv4", 00:12:59.467 "traddr": "10.0.0.3", 00:12:59.467 "trsvcid": "4420" 00:12:59.467 }, 00:12:59.467 "peer_address": { 00:12:59.467 "trtype": "TCP", 00:12:59.467 "adrfam": "IPv4", 00:12:59.467 "traddr": "10.0.0.1", 00:12:59.467 "trsvcid": "34292" 00:12:59.467 }, 00:12:59.467 "auth": { 00:12:59.467 "state": "completed", 00:12:59.467 "digest": "sha512", 00:12:59.467 "dhgroup": "ffdhe4096" 00:12:59.467 } 00:12:59.467 } 00:12:59.467 ]' 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.467 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.726 11:02:05 
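[annotation] The iteration that just completed in the trace above (sha512 / ffdhe4096 / key1) reduces to a short RPC sequence. The sketch below is an illustrative reconstruction assembled only from the commands visible in this log, not the test script itself; the rpc.py path, socket paths, NQNs, address, and key names are copied from the trace, and it assumes the DH-HMAC-CHAP keys (key1/ckey1) were registered with the host and target earlier in the run (not shown in this excerpt).

  # Illustrative sketch of one auth iteration as seen in the trace above.
  # Assumes: target already listening on 10.0.0.3:4420, host app on
  # /var/tmp/host.sock, target RPCs going to the default socket (as rpc_cmd
  # appears to do here), and keys key1/ckey1 registered beforehand.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict negotiation to the digest/dhgroup under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side: grant the host access with a key (and controller key) pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a bdev controller, which drives the in-band authentication.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The detach/remove_host ordering mirrors the trace: the host controller is dropped first, then the target-side host grant is removed before the next key is tried.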
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:12:59.726 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.664 11:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.923 11:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.923 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.182 00:13:01.182 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.182 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.182 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.442 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.442 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.442 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.442 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.709 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.709 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.709 { 00:13:01.709 "cntlid": 125, 00:13:01.709 "qid": 0, 00:13:01.709 "state": "enabled", 00:13:01.709 "thread": "nvmf_tgt_poll_group_000", 00:13:01.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:01.709 "listen_address": { 00:13:01.709 "trtype": "TCP", 00:13:01.709 "adrfam": "IPv4", 00:13:01.709 "traddr": "10.0.0.3", 00:13:01.709 "trsvcid": "4420" 00:13:01.709 }, 00:13:01.709 "peer_address": { 00:13:01.709 "trtype": "TCP", 00:13:01.709 "adrfam": "IPv4", 00:13:01.709 "traddr": "10.0.0.1", 00:13:01.709 "trsvcid": "33206" 00:13:01.709 }, 00:13:01.709 "auth": { 00:13:01.709 "state": "completed", 00:13:01.709 "digest": "sha512", 00:13:01.709 "dhgroup": "ffdhe4096" 00:13:01.709 } 00:13:01.709 } 00:13:01.709 ]' 00:13:01.709 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.709 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.709 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.709 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.709 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.709 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.709 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.709 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.967 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:13:01.967 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:02.902 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.162 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.421 00:13:03.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.421 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.680 { 00:13:03.680 "cntlid": 127, 00:13:03.680 "qid": 0, 00:13:03.680 "state": "enabled", 00:13:03.680 "thread": "nvmf_tgt_poll_group_000", 00:13:03.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:03.680 "listen_address": { 00:13:03.680 "trtype": "TCP", 00:13:03.680 "adrfam": "IPv4", 00:13:03.680 "traddr": "10.0.0.3", 00:13:03.680 "trsvcid": "4420" 00:13:03.680 }, 00:13:03.680 "peer_address": { 00:13:03.680 "trtype": "TCP", 00:13:03.680 "adrfam": "IPv4", 00:13:03.680 "traddr": "10.0.0.1", 00:13:03.680 "trsvcid": "33222" 00:13:03.680 }, 00:13:03.680 "auth": { 00:13:03.680 "state": "completed", 00:13:03.680 "digest": "sha512", 00:13:03.680 "dhgroup": "ffdhe4096" 00:13:03.680 } 00:13:03.680 } 00:13:03.680 ]' 00:13:03.680 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.939 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.199 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:04.199 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.134 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.394 11:02:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.394 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.653 00:13:05.913 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.913 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.913 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.172 { 00:13:06.172 "cntlid": 129, 00:13:06.172 "qid": 0, 00:13:06.172 "state": "enabled", 00:13:06.172 "thread": "nvmf_tgt_poll_group_000", 00:13:06.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:06.172 "listen_address": { 00:13:06.172 "trtype": "TCP", 00:13:06.172 "adrfam": "IPv4", 00:13:06.172 "traddr": "10.0.0.3", 00:13:06.172 "trsvcid": "4420" 00:13:06.172 }, 00:13:06.172 "peer_address": { 00:13:06.172 "trtype": "TCP", 00:13:06.172 "adrfam": "IPv4", 00:13:06.172 "traddr": "10.0.0.1", 00:13:06.172 "trsvcid": "33240" 00:13:06.172 }, 00:13:06.172 "auth": { 00:13:06.172 "state": "completed", 00:13:06.172 "digest": "sha512", 00:13:06.172 "dhgroup": "ffdhe6144" 00:13:06.172 } 00:13:06.172 } 00:13:06.172 ]' 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
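[annotation] The block above ends with the verification step that each iteration repeats: confirm the controller name on the host, then read back what the target actually negotiated. A minimal sketch of that check, using only the RPCs and jq filters shown in the trace; variable names are illustrative and the target RPC is assumed to go to the default socket, as rpc_cmd does in this run.

  # Illustrative re-statement of the connect_authenticate verification above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # The attached controller must show up under the expected name on the host.
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # The target's qpair listing reports the negotiated auth parameters.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]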
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.172 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.431 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:13:06.431 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:07.000 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.277 11:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.277 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.870 00:13:07.870 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.870 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.870 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.130 { 00:13:08.130 "cntlid": 131, 00:13:08.130 "qid": 0, 00:13:08.130 "state": "enabled", 00:13:08.130 "thread": "nvmf_tgt_poll_group_000", 00:13:08.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:08.130 "listen_address": { 00:13:08.130 "trtype": "TCP", 00:13:08.130 "adrfam": "IPv4", 00:13:08.130 "traddr": "10.0.0.3", 00:13:08.130 "trsvcid": "4420" 00:13:08.130 }, 00:13:08.130 "peer_address": { 00:13:08.130 "trtype": "TCP", 00:13:08.130 "adrfam": "IPv4", 00:13:08.130 "traddr": "10.0.0.1", 00:13:08.130 "trsvcid": "33272" 00:13:08.130 }, 00:13:08.130 "auth": { 00:13:08.130 "state": "completed", 00:13:08.130 "digest": "sha512", 00:13:08.130 "dhgroup": "ffdhe6144" 00:13:08.130 } 00:13:08.130 } 00:13:08.130 ]' 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.130 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.389 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:13:08.389 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:13:08.957 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.957 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:08.957 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.957 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.216 11:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.216 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.475 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.475 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.475 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.475 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.042 00:13:10.042 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.042 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.043 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.301 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.301 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.301 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.301 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.301 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.301 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.301 { 00:13:10.301 "cntlid": 133, 00:13:10.301 "qid": 0, 00:13:10.301 "state": "enabled", 00:13:10.301 "thread": "nvmf_tgt_poll_group_000", 00:13:10.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:10.301 "listen_address": { 00:13:10.301 "trtype": "TCP", 00:13:10.301 "adrfam": "IPv4", 00:13:10.301 "traddr": "10.0.0.3", 00:13:10.301 "trsvcid": "4420" 00:13:10.301 }, 00:13:10.301 "peer_address": { 00:13:10.302 "trtype": "TCP", 00:13:10.302 "adrfam": "IPv4", 00:13:10.302 "traddr": "10.0.0.1", 00:13:10.302 "trsvcid": "34310" 00:13:10.302 }, 00:13:10.302 "auth": { 00:13:10.302 "state": "completed", 00:13:10.302 "digest": "sha512", 00:13:10.302 "dhgroup": "ffdhe6144" 00:13:10.302 } 00:13:10.302 } 00:13:10.302 ]' 00:13:10.302 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.302 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.302 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.302 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:10.302 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.561 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.561 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.561 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.819 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:13:10.819 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.387 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.646 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.906 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.906 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:11.906 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.906 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.165 00:13:12.165 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.165 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.165 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.424 { 00:13:12.424 "cntlid": 135, 00:13:12.424 "qid": 0, 00:13:12.424 "state": "enabled", 00:13:12.424 "thread": "nvmf_tgt_poll_group_000", 00:13:12.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:12.424 "listen_address": { 00:13:12.424 "trtype": "TCP", 00:13:12.424 "adrfam": "IPv4", 00:13:12.424 "traddr": "10.0.0.3", 00:13:12.424 "trsvcid": "4420" 00:13:12.424 }, 00:13:12.424 "peer_address": { 00:13:12.424 "trtype": "TCP", 00:13:12.424 "adrfam": "IPv4", 00:13:12.424 "traddr": "10.0.0.1", 00:13:12.424 "trsvcid": "34338" 00:13:12.424 }, 00:13:12.424 "auth": { 00:13:12.424 "state": "completed", 00:13:12.424 "digest": "sha512", 00:13:12.424 "dhgroup": "ffdhe6144" 00:13:12.424 } 00:13:12.424 } 00:13:12.424 ]' 00:13:12.424 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.684 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.684 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.684 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:12.684 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.684 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.684 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.684 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.942 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:12.942 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:13.542 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.802 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.369 00:13:14.369 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.369 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.369 11:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.936 { 00:13:14.936 "cntlid": 137, 00:13:14.936 "qid": 0, 00:13:14.936 "state": "enabled", 00:13:14.936 "thread": "nvmf_tgt_poll_group_000", 00:13:14.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:14.936 "listen_address": { 00:13:14.936 "trtype": "TCP", 00:13:14.936 "adrfam": "IPv4", 00:13:14.936 "traddr": "10.0.0.3", 00:13:14.936 "trsvcid": "4420" 00:13:14.936 }, 00:13:14.936 "peer_address": { 00:13:14.936 "trtype": "TCP", 00:13:14.936 "adrfam": "IPv4", 00:13:14.936 "traddr": "10.0.0.1", 00:13:14.936 "trsvcid": "34376" 00:13:14.936 }, 00:13:14.936 "auth": { 00:13:14.936 "state": "completed", 00:13:14.936 "digest": "sha512", 00:13:14.936 "dhgroup": "ffdhe8192" 00:13:14.936 } 00:13:14.936 } 00:13:14.936 ]' 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.936 11:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.936 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.194 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:13:15.194 11:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:13:15.762 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:16.022 11:02:21 
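[annotation] The cycle that just finished also exercises the kernel initiator path: nvme-cli connects with the DHHC-1 secrets passed inline, then disconnects before the host grant is removed. A minimal sketch of that leg, with the flags taken from the trace; the secret values are deliberately left as placeholders rather than repeated here.

  # Illustrative sketch of the nvme-cli leg seen just above.
  hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  subnqn=nqn.2024-03.io.spdk:cnode0
  host_key='DHHC-1:00:…'   # copy the host secret printed in the trace
  ctrl_key='DHHC-1:03:…'   # copy the controller secret printed in the trace

  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid "$hostid" -l 0 \
      --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"

  # Drop the connection before the next digest/dhgroup/key iteration.
  nvme disconnect -n "$subnqn"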
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.022 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.281 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.281 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.281 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.281 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.849 00:13:16.849 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.849 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.849 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.108 { 00:13:17.108 "cntlid": 139, 00:13:17.108 "qid": 0, 00:13:17.108 "state": "enabled", 00:13:17.108 "thread": "nvmf_tgt_poll_group_000", 00:13:17.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:17.108 "listen_address": { 00:13:17.108 "trtype": "TCP", 00:13:17.108 "adrfam": "IPv4", 00:13:17.108 "traddr": "10.0.0.3", 00:13:17.108 "trsvcid": "4420" 00:13:17.108 }, 00:13:17.108 "peer_address": { 00:13:17.108 "trtype": "TCP", 00:13:17.108 "adrfam": "IPv4", 00:13:17.108 "traddr": "10.0.0.1", 00:13:17.108 "trsvcid": "34418" 00:13:17.108 }, 00:13:17.108 "auth": { 00:13:17.108 "state": "completed", 00:13:17.108 "digest": "sha512", 00:13:17.108 "dhgroup": "ffdhe8192" 00:13:17.108 } 00:13:17.108 } 00:13:17.108 ]' 00:13:17.108 11:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.108 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.367 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:13:17.367 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: --dhchap-ctrl-secret DHHC-1:02:OWM5ODg0YzgzMWY5NDdhNmVmNjM5MDcxNzM0NzI4MGI4YzU3MzQ1MjQ2YzFjMGUwwvvgiA==: 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.302 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
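The @75-@77 checks just above are the verification half of the cycle: they read the negotiated authentication parameters back from the target and compare them with what was requested. A sketch of that check, assuming the default target RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # Each listed qpair carries an "auth" object once DH-HMAC-CHAP has run.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]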
dhgroup=ffdhe8192 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.561 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.128 00:13:19.128 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.128 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.128 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.388 { 00:13:19.388 "cntlid": 141, 00:13:19.388 "qid": 0, 00:13:19.388 "state": "enabled", 00:13:19.388 "thread": "nvmf_tgt_poll_group_000", 00:13:19.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:19.388 "listen_address": { 00:13:19.388 "trtype": "TCP", 00:13:19.388 "adrfam": "IPv4", 00:13:19.388 "traddr": "10.0.0.3", 00:13:19.388 "trsvcid": "4420" 00:13:19.388 }, 00:13:19.388 "peer_address": { 00:13:19.388 "trtype": "TCP", 00:13:19.388 "adrfam": "IPv4", 00:13:19.388 "traddr": "10.0.0.1", 00:13:19.388 "trsvcid": "34446" 00:13:19.388 }, 00:13:19.388 "auth": { 00:13:19.388 "state": "completed", 00:13:19.388 "digest": 
"sha512", 00:13:19.388 "dhgroup": "ffdhe8192" 00:13:19.388 } 00:13:19.388 } 00:13:19.388 ]' 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.388 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.648 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.648 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.648 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.648 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.648 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.907 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:13:19.907 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:01:NDIwZTA2MjM3ZDU0MTljMDAyMGEzYjNlNDc5ODQzYTb/wAhU: 00:13:20.843 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.843 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:20.843 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.843 11:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:20.843 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.780 00:13:21.780 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.780 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.780 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.780 { 00:13:21.780 "cntlid": 143, 00:13:21.780 "qid": 0, 00:13:21.780 "state": "enabled", 00:13:21.780 "thread": "nvmf_tgt_poll_group_000", 00:13:21.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:21.780 "listen_address": { 00:13:21.780 "trtype": "TCP", 00:13:21.780 "adrfam": "IPv4", 00:13:21.780 "traddr": "10.0.0.3", 00:13:21.780 "trsvcid": "4420" 00:13:21.780 }, 00:13:21.780 "peer_address": { 00:13:21.780 "trtype": "TCP", 00:13:21.780 "adrfam": "IPv4", 00:13:21.780 "traddr": "10.0.0.1", 00:13:21.780 "trsvcid": "55248" 00:13:21.780 }, 00:13:21.780 "auth": { 00:13:21.780 "state": "completed", 00:13:21.780 
"digest": "sha512", 00:13:21.780 "dhgroup": "ffdhe8192" 00:13:21.780 } 00:13:21.780 } 00:13:21.780 ]' 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.780 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.039 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:22.039 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.039 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.039 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.039 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.298 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:22.298 11:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.865 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.125 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.384 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.384 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.384 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.384 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.951 00:13:23.951 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.951 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.951 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.209 { 00:13:24.209 "cntlid": 145, 00:13:24.209 "qid": 0, 00:13:24.209 "state": "enabled", 00:13:24.209 "thread": "nvmf_tgt_poll_group_000", 00:13:24.209 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:24.209 "listen_address": { 00:13:24.209 "trtype": "TCP", 00:13:24.209 "adrfam": "IPv4", 00:13:24.209 "traddr": "10.0.0.3", 00:13:24.209 "trsvcid": "4420" 00:13:24.209 }, 00:13:24.209 "peer_address": { 00:13:24.209 "trtype": "TCP", 00:13:24.209 "adrfam": "IPv4", 00:13:24.209 "traddr": "10.0.0.1", 00:13:24.209 "trsvcid": "55274" 00:13:24.209 }, 00:13:24.209 "auth": { 00:13:24.209 "state": "completed", 00:13:24.209 "digest": "sha512", 00:13:24.209 "dhgroup": "ffdhe8192" 00:13:24.209 } 00:13:24.209 } 00:13:24.209 ]' 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:24.209 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.468 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.468 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.468 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.727 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:13:24.727 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:00:ZTNiYTlkMzEwNDkxMDhiNTQ5MzBlNGY3ZmVkYzgyNmVhMTYyMjFlOGQ5YzI4ZjhiUWqO1g==: --dhchap-ctrl-secret DHHC-1:03:MGZjN2RjZTUxMGM5ZjM1YWFjYTBmODRhYTI5ZTE5MTJjMzM5YTg4OTJkNzJhYTQ2NDM4MTk0NDFjMzNiYzUxZKgzRJU=: 00:13:25.294 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 00:13:25.295 11:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:25.295 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:25.863 request: 00:13:25.863 { 00:13:25.863 "name": "nvme0", 00:13:25.863 "trtype": "tcp", 00:13:25.863 "traddr": "10.0.0.3", 00:13:25.863 "adrfam": "ipv4", 00:13:25.863 "trsvcid": "4420", 00:13:25.863 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:25.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:25.863 "prchk_reftag": false, 00:13:25.863 "prchk_guard": false, 00:13:25.863 "hdgst": false, 00:13:25.863 "ddgst": false, 00:13:25.863 "dhchap_key": "key2", 00:13:25.863 "allow_unrecognized_csi": false, 00:13:25.863 "method": "bdev_nvme_attach_controller", 00:13:25.863 "req_id": 1 00:13:25.863 } 00:13:25.863 Got JSON-RPC error response 00:13:25.863 response: 00:13:25.863 { 00:13:25.863 "code": -5, 00:13:25.863 "message": "Input/output error" 00:13:25.863 } 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:25.863 
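That block is the first negative check: the target only knows this host with key1, so attaching with key2 has to fail, and the test's NOT wrapper passes only when the wrapped command exits non-zero. The failure surfaces as the JSON-RPC error -5 (Input/output error) in the request/response dump. Hand-rolled, the same expectation looks roughly like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6
  # rpc.py exits non-zero when the RPC returns an error, which is what this check relies on.
  if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
          -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2; then
      echo "unexpected: attach succeeded with the wrong DH-HMAC-CHAP key" >&2
      exit 1
  fi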
11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:25.863 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:26.431 request: 00:13:26.431 { 00:13:26.431 "name": "nvme0", 00:13:26.431 "trtype": "tcp", 00:13:26.431 "traddr": "10.0.0.3", 00:13:26.431 "adrfam": "ipv4", 00:13:26.431 "trsvcid": "4420", 00:13:26.431 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:26.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:26.431 "prchk_reftag": false, 00:13:26.431 "prchk_guard": false, 00:13:26.431 "hdgst": false, 00:13:26.431 "ddgst": false, 00:13:26.431 "dhchap_key": "key1", 00:13:26.431 "dhchap_ctrlr_key": "ckey2", 00:13:26.431 "allow_unrecognized_csi": false, 00:13:26.431 "method": "bdev_nvme_attach_controller", 00:13:26.431 "req_id": 1 00:13:26.431 } 00:13:26.431 Got JSON-RPC error response 00:13:26.431 response: 00:13:26.431 { 
00:13:26.431 "code": -5, 00:13:26.431 "message": "Input/output error" 00:13:26.431 } 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.431 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.369 
request: 00:13:27.369 { 00:13:27.369 "name": "nvme0", 00:13:27.369 "trtype": "tcp", 00:13:27.369 "traddr": "10.0.0.3", 00:13:27.369 "adrfam": "ipv4", 00:13:27.369 "trsvcid": "4420", 00:13:27.369 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:27.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:27.369 "prchk_reftag": false, 00:13:27.369 "prchk_guard": false, 00:13:27.369 "hdgst": false, 00:13:27.369 "ddgst": false, 00:13:27.369 "dhchap_key": "key1", 00:13:27.369 "dhchap_ctrlr_key": "ckey1", 00:13:27.369 "allow_unrecognized_csi": false, 00:13:27.369 "method": "bdev_nvme_attach_controller", 00:13:27.369 "req_id": 1 00:13:27.369 } 00:13:27.369 Got JSON-RPC error response 00:13:27.369 response: 00:13:27.369 { 00:13:27.369 "code": -5, 00:13:27.369 "message": "Input/output error" 00:13:27.369 } 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 80074 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 80074 ']' 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 80074 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80074 00:13:27.369 killing process with pid 80074 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80074' 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 80074 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 80074 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.369 11:02:32 
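The two request/error pairs a few entries above extend the same idea to the controller (bidirectional) direction: once the host passes --dhchap-ctrlr-key, it has to match the --dhchap-ctrlr-key the target was given in nvmf_subsystem_add_host. Presenting ckey2 against a target bound to ckey1, or presenting ckey1 when the target was given no controller key at all, both end in the same -5 Input/output error. In sketch form, with the same values as in the earlier snippets (the leading ! flips the exit status, so each attach is expected to fail):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6
  # (a) target bound to ckey1, host presents ckey2:
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ! "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
  # (b) target given no controller key, host still requests bidirectional auth:
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1
  ! "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1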
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=83160 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 83160 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 83160 ']' 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.369 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83160 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 83160 ']' 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
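Here the first target process (pid 80074) has been killed and a fresh nvmf_tgt (pid 83160) is brought up for the keyring-based part of the test; the run wraps the launch in ip netns exec for network isolation. -L nvmf_auth turns on the authentication debug log, and --wait-for-rpc holds the application in a pre-init state so it can be configured over the RPC socket before subsystems start; in SPDK that state is normally released with the framework_start_init RPC. A sketch of the relaunch, assuming the same repo layout as this run:

  # Relaunch the target with auth debug logging and hold it before init.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Crude stand-in for the waitforlisten helper: poll until the RPC socket answers.
  until "$rpc" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
  # ... pre-init configuration goes here ...
  "$rpc" framework_start_init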
00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.628 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.887 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.887 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:13:27.887 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:27.887 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.887 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.887 null0 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.VsZ 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Gst ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gst 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rVE 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.g6H ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.g6H 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:28.147 11:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8gv 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Jx7 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Jx7 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Q2y 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
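The rpc_cmd batch above loads the DH-HMAC-CHAP secrets into the restarted target's keyring as file-backed keys, so later RPCs can refer to them purely by name (key0..key3, ckey0..ckey2), which is what the --dhchap-key/--dhchap-ctrlr-key arguments throughout this log are. A sketch using one of the key files from this run (its content is the DHHC-1 secret string, not shown here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the secret file under a short name in the target's keyring...
  "$rpc" keyring_file_add_key key3 /tmp/spdk.key-sha512.Q2y
  # ...then hand that name to the subsystem when allowing the host.
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3

The host-side process resolves key names against its own keyring, so the same names have to exist there as well; in this run they were presumably registered when the host application was set up, before the section shown here.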
00:13:28.147 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.085 nvme0n1 00:13:29.085 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.085 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.085 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.344 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.344 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.344 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.344 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.344 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.344 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.344 { 00:13:29.344 "cntlid": 1, 00:13:29.344 "qid": 0, 00:13:29.344 "state": "enabled", 00:13:29.344 "thread": "nvmf_tgt_poll_group_000", 00:13:29.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:29.344 "listen_address": { 00:13:29.344 "trtype": "TCP", 00:13:29.344 "adrfam": "IPv4", 00:13:29.344 "traddr": "10.0.0.3", 00:13:29.345 "trsvcid": "4420" 00:13:29.345 }, 00:13:29.345 "peer_address": { 00:13:29.345 "trtype": "TCP", 00:13:29.345 "adrfam": "IPv4", 00:13:29.345 "traddr": "10.0.0.1", 00:13:29.345 "trsvcid": "55322" 00:13:29.345 }, 00:13:29.345 "auth": { 00:13:29.345 "state": "completed", 00:13:29.345 "digest": "sha512", 00:13:29.345 "dhgroup": "ffdhe8192" 00:13:29.345 } 00:13:29.345 } 00:13:29.345 ]' 00:13:29.345 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.345 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.345 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.604 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:29.604 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.604 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.604 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.604 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.862 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:29.862 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:30.430 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key3 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:30.688 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:30.947 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.205 request: 00:13:31.205 { 00:13:31.205 "name": "nvme0", 00:13:31.205 "trtype": "tcp", 00:13:31.205 "traddr": "10.0.0.3", 00:13:31.205 "adrfam": "ipv4", 00:13:31.205 "trsvcid": "4420", 00:13:31.205 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:31.206 "prchk_reftag": false, 00:13:31.206 "prchk_guard": false, 00:13:31.206 "hdgst": false, 00:13:31.206 "ddgst": false, 00:13:31.206 "dhchap_key": "key3", 00:13:31.206 "allow_unrecognized_csi": false, 00:13:31.206 "method": "bdev_nvme_attach_controller", 00:13:31.206 "req_id": 1 00:13:31.206 } 00:13:31.206 Got JSON-RPC error response 00:13:31.206 response: 00:13:31.206 { 00:13:31.206 "code": -5, 00:13:31.206 "message": "Input/output error" 00:13:31.206 } 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:31.206 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.465 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.724 request: 00:13:31.724 { 00:13:31.724 "name": "nvme0", 00:13:31.724 "trtype": "tcp", 00:13:31.724 "traddr": "10.0.0.3", 00:13:31.724 "adrfam": "ipv4", 00:13:31.724 "trsvcid": "4420", 00:13:31.724 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:31.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:31.724 "prchk_reftag": false, 00:13:31.724 "prchk_guard": false, 00:13:31.724 "hdgst": false, 00:13:31.724 "ddgst": false, 00:13:31.724 "dhchap_key": "key3", 00:13:31.724 "allow_unrecognized_csi": false, 00:13:31.724 "method": "bdev_nvme_attach_controller", 00:13:31.724 "req_id": 1 00:13:31.724 } 00:13:31.724 Got JSON-RPC error response 00:13:31.724 response: 00:13:31.724 { 00:13:31.724 "code": -5, 00:13:31.724 "message": "Input/output error" 00:13:31.724 } 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:31.724 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:31.983 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:32.551 request: 00:13:32.551 { 00:13:32.551 "name": "nvme0", 00:13:32.551 "trtype": "tcp", 00:13:32.551 "traddr": "10.0.0.3", 00:13:32.551 "adrfam": "ipv4", 00:13:32.551 "trsvcid": "4420", 00:13:32.551 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:32.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:32.551 "prchk_reftag": false, 00:13:32.551 "prchk_guard": false, 00:13:32.551 "hdgst": false, 00:13:32.551 "ddgst": false, 00:13:32.551 "dhchap_key": "key0", 00:13:32.551 "dhchap_ctrlr_key": "key1", 00:13:32.551 "allow_unrecognized_csi": false, 00:13:32.551 "method": "bdev_nvme_attach_controller", 00:13:32.551 "req_id": 1 00:13:32.551 } 00:13:32.551 Got JSON-RPC error response 00:13:32.551 response: 00:13:32.551 { 00:13:32.551 "code": -5, 00:13:32.551 "message": "Input/output error" 00:13:32.551 } 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:32.551 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:32.810 nvme0n1 00:13:32.810 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:32.810 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:32.810 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.069 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.069 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.069 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:33.329 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:34.273 nvme0n1 00:13:34.273 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:34.273 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.273 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.532 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:34.791 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.791 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:34.791 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid 61a87890-fef5-4d39-ae0e-c34cd0a177b6 -l 0 --dhchap-secret DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ2YmQzM2M0ZjI5ZDUwZmE3OTkzYjExYjYxNDlhZDlmYWE5Yzc2ZmY3MzkzOGViMWRhODRkNjc3OTY2MWNjMy9fm9M=: 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.729 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:35.729 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:36.297 request: 00:13:36.297 { 00:13:36.297 "name": "nvme0", 00:13:36.297 "trtype": "tcp", 00:13:36.297 "traddr": "10.0.0.3", 00:13:36.297 "adrfam": "ipv4", 00:13:36.297 "trsvcid": "4420", 00:13:36.297 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:36.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6", 00:13:36.297 "prchk_reftag": false, 00:13:36.297 "prchk_guard": false, 00:13:36.297 "hdgst": false, 00:13:36.297 "ddgst": false, 00:13:36.297 "dhchap_key": "key1", 00:13:36.297 "allow_unrecognized_csi": false, 00:13:36.297 "method": "bdev_nvme_attach_controller", 00:13:36.297 "req_id": 1 00:13:36.297 } 00:13:36.297 Got JSON-RPC error response 00:13:36.297 response: 00:13:36.297 { 00:13:36.297 "code": -5, 00:13:36.297 "message": "Input/output error" 00:13:36.297 } 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:36.557 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:37.499 nvme0n1 00:13:37.499 
11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:37.500 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:37.500 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.759 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.759 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.759 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:38.019 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:38.278 nvme0n1 00:13:38.278 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:38.278 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:38.278 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.537 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.537 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.537 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.796 11:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: '' 2s 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: ]] 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjQ3NTg3OTk0Mzk3ZTVkZjI0YWQ4Y2Q5MDNkNDQ1ZGQgSTFB: 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:38.796 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: 2s 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:41.332 11:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:41.332 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:41.333 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:41.333 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: ]] 00:13:41.333 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjUzYjhhN2Q0YWUyYTg0ODVmZTAzODRlYWQ1Y2M4YTM4NmVlNTM2OTkyMjU4MzFjUI9/EQ==: 00:13:41.333 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:41.333 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:43.237 11:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:44.174 nvme0n1 00:13:44.174 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:44.174 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.174 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.174 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.174 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:44.174 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:44.754 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:44.754 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.754 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:45.014 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:45.273 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:45.273 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:45.273 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:45.532 11:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:45.532 11:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:46.099 request: 00:13:46.099 { 00:13:46.099 "name": "nvme0", 00:13:46.099 "dhchap_key": "key1", 00:13:46.099 "dhchap_ctrlr_key": "key3", 00:13:46.099 "method": "bdev_nvme_set_keys", 00:13:46.099 "req_id": 1 00:13:46.099 } 00:13:46.099 Got JSON-RPC error response 00:13:46.099 response: 00:13:46.099 { 00:13:46.099 "code": -13, 00:13:46.099 "message": "Permission denied" 00:13:46.099 } 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.099 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:46.668 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:46.668 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:47.606 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:47.606 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:47.606 11:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:47.866 11:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:48.802 nvme0n1 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:49.060 11:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:49.628 request: 00:13:49.628 { 00:13:49.628 "name": "nvme0", 00:13:49.628 "dhchap_key": "key2", 00:13:49.628 "dhchap_ctrlr_key": "key0", 00:13:49.628 "method": "bdev_nvme_set_keys", 00:13:49.628 "req_id": 1 00:13:49.628 } 00:13:49.628 Got JSON-RPC error response 00:13:49.628 response: 00:13:49.628 { 00:13:49.628 "code": -13, 00:13:49.628 "message": "Permission denied" 00:13:49.628 } 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:49.628 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.887 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:49.887 11:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80103 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 80103 ']' 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 80103 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80103 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:51.265 killing process with pid 80103 00:13:51.265 11:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80103' 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 80103 00:13:51.265 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 80103 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.524 rmmod nvme_tcp 00:13:51.524 rmmod nvme_fabrics 00:13:51.524 rmmod nvme_keyring 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 83160 ']' 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 83160 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 83160 ']' 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 83160 00:13:51.524 11:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:13:51.524 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.524 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83160 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:51.784 killing process with pid 83160 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83160' 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 83160 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 83160 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:51.784 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.VsZ /tmp/spdk.key-sha256.rVE /tmp/spdk.key-sha384.8gv /tmp/spdk.key-sha512.Q2y /tmp/spdk.key-sha512.Gst /tmp/spdk.key-sha384.g6H /tmp/spdk.key-sha256.Jx7 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:52.043 00:13:52.043 real 3m11.074s 00:13:52.043 user 7m36.682s 00:13:52.043 sys 0m29.988s 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.043 ************************************ 00:13:52.043 END TEST nvmf_auth_target 
00:13:52.043 ************************************ 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.043 ************************************ 00:13:52.043 START TEST nvmf_bdevio_no_huge 00:13:52.043 ************************************ 00:13:52.043 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:52.303 * Looking for test storage... 00:13:52.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.303 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.304 --rc genhtml_branch_coverage=1 00:13:52.304 --rc genhtml_function_coverage=1 00:13:52.304 --rc genhtml_legend=1 00:13:52.304 --rc geninfo_all_blocks=1 00:13:52.304 --rc geninfo_unexecuted_blocks=1 00:13:52.304 00:13:52.304 ' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.304 --rc genhtml_branch_coverage=1 00:13:52.304 --rc genhtml_function_coverage=1 00:13:52.304 --rc genhtml_legend=1 00:13:52.304 --rc geninfo_all_blocks=1 00:13:52.304 --rc geninfo_unexecuted_blocks=1 00:13:52.304 00:13:52.304 ' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.304 --rc genhtml_branch_coverage=1 00:13:52.304 --rc genhtml_function_coverage=1 00:13:52.304 --rc genhtml_legend=1 00:13:52.304 --rc geninfo_all_blocks=1 00:13:52.304 --rc geninfo_unexecuted_blocks=1 00:13:52.304 00:13:52.304 ' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:52.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.304 --rc genhtml_branch_coverage=1 00:13:52.304 --rc genhtml_function_coverage=1 00:13:52.304 --rc genhtml_legend=1 00:13:52.304 --rc geninfo_all_blocks=1 00:13:52.304 --rc geninfo_unexecuted_blocks=1 00:13:52.304 00:13:52.304 ' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:52.304 
11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.304 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:52.304 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:52.305 
11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:52.305 Cannot find device "nvmf_init_br" 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:52.305 Cannot find device "nvmf_init_br2" 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:52.305 Cannot find device "nvmf_tgt_br" 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:52.305 Cannot find device "nvmf_tgt_br2" 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:52.305 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:52.564 Cannot find device "nvmf_init_br" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:52.564 Cannot find device "nvmf_init_br2" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:52.564 Cannot find device "nvmf_tgt_br" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:52.564 Cannot find device "nvmf_tgt_br2" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:52.564 Cannot find device "nvmf_br" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:52.564 Cannot find device "nvmf_init_if" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:52.564 Cannot find device "nvmf_init_if2" 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:52.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:52.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:52.564 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:52.565 11:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:52.565 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:52.825 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:52.825 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:13:52.825 00:13:52.825 --- 10.0.0.3 ping statistics --- 00:13:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.825 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:52.825 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:52.825 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:13:52.825 00:13:52.825 --- 10.0.0.4 ping statistics --- 00:13:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.825 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:52.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:52.825 00:13:52.825 --- 10.0.0.1 ping statistics --- 00:13:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.825 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:52.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
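Condensed, the nvmf_veth_init sequence traced above builds the whole test network: four veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined on the nvmf_br bridge, 10.0.0.1/2 on the initiator side and 10.0.0.3/4 inside the namespace, plus iptables ACCEPT rules for port 4420 and one ping per address to confirm reachability. A stripped-down sketch with the same names and addresses (error handling and the comment-tagged iptables wrapper omitted):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                  # initiator side -> target side
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target side -> initiator side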
00:13:52.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:13:52.825 00:13:52.825 --- 10.0.0.2 ping statistics --- 00:13:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.825 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=83796 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 83796 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 83796 ']' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.825 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:52.825 [2024-10-29 11:02:58.231138] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
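With all four addresses answering pings, nvmfappstart launches the target inside the namespace; the exact invocation appears in the trace above, and the --no-huge -s 1024 pair is what gives this suite its name: the target runs without hugepages inside a fixed memory budget. Reduced to a runnable sketch (paths as on this test VM; the waitforlisten helper is approximated here by polling the default RPC socket):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# Roughly what waitforlisten does: block until the RPC socket accepts commands.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done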
00:13:52.825 [2024-10-29 11:02:58.231254] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:53.085 [2024-10-29 11:02:58.393928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.085 [2024-10-29 11:02:58.448446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.085 [2024-10-29 11:02:58.448939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.085 [2024-10-29 11:02:58.449604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.085 [2024-10-29 11:02:58.450115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.085 [2024-10-29 11:02:58.450326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.085 [2024-10-29 11:02:58.451327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:53.085 [2024-10-29 11:02:58.451483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:53.085 [2024-10-29 11:02:58.451604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:53.085 [2024-10-29 11:02:58.451607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.085 [2024-10-29 11:02:58.458083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 [2024-10-29 11:02:59.256155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 Malloc0 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.021 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.021 [2024-10-29 11:02:59.296302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:54.021 { 00:13:54.021 "params": { 00:13:54.021 "name": "Nvme$subsystem", 00:13:54.021 "trtype": "$TEST_TRANSPORT", 00:13:54.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.021 "adrfam": "ipv4", 00:13:54.021 "trsvcid": "$NVMF_PORT", 00:13:54.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.021 "hdgst": ${hdgst:-false}, 00:13:54.021 "ddgst": ${ddgst:-false} 00:13:54.021 }, 00:13:54.021 "method": "bdev_nvme_attach_controller" 00:13:54.021 } 00:13:54.021 EOF 00:13:54.021 )") 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
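The rpc_cmd calls traced above provision the target for the bdevio run; rpc_cmd is a thin wrapper around scripts/rpc.py, so the same sequence written out directly looks like this (flags exactly as in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB ramdisk, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The JSON that gen_nvmf_target_json prints next is what bdevio consumes on /dev/fd/62: a single bdev_nvme_attach_controller entry named Nvme1 pointing at 10.0.0.3:4420, which is why the suite below reports one Nvme1n1 I/O target of 131072 blocks of 512 bytes.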
00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:54.021 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:54.021 "params": { 00:13:54.021 "name": "Nvme1", 00:13:54.021 "trtype": "tcp", 00:13:54.021 "traddr": "10.0.0.3", 00:13:54.021 "adrfam": "ipv4", 00:13:54.021 "trsvcid": "4420", 00:13:54.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.021 "hdgst": false, 00:13:54.021 "ddgst": false 00:13:54.021 }, 00:13:54.021 "method": "bdev_nvme_attach_controller" 00:13:54.021 }' 00:13:54.021 [2024-10-29 11:02:59.351433] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:13:54.021 [2024-10-29 11:02:59.351503] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83832 ] 00:13:54.021 [2024-10-29 11:02:59.501764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.288 [2024-10-29 11:02:59.562700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.288 [2024-10-29 11:02:59.562861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.288 [2024-10-29 11:02:59.562882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.288 [2024-10-29 11:02:59.592778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.552 I/O targets: 00:13:54.552 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:54.552 00:13:54.552 00:13:54.552 CUnit - A unit testing framework for C - Version 2.1-3 00:13:54.552 http://cunit.sourceforge.net/ 00:13:54.552 00:13:54.552 00:13:54.552 Suite: bdevio tests on: Nvme1n1 00:13:54.552 Test: blockdev write read block ...passed 00:13:54.552 Test: blockdev write zeroes read block ...passed 00:13:54.552 Test: blockdev write zeroes read no split ...passed 00:13:54.552 Test: blockdev write zeroes read split ...passed 00:13:54.552 Test: blockdev write zeroes read split partial ...passed 00:13:54.552 Test: blockdev reset ...[2024-10-29 11:02:59.848685] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:54.552 [2024-10-29 11:02:59.848815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100f430 (9): Bad file descriptor 00:13:54.552 [2024-10-29 11:02:59.867869] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resettipassed 00:13:54.552 Test: blockdev write read 8 blocks ...ng controller successful. 
00:13:54.552 passed 00:13:54.552 Test: blockdev write read size > 128k ...passed 00:13:54.552 Test: blockdev write read invalid size ...passed 00:13:54.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:54.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:54.552 Test: blockdev write read max offset ...passed 00:13:54.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:54.552 Test: blockdev writev readv 8 blocks ...passed 00:13:54.552 Test: blockdev writev readv 30 x 1block ...passed 00:13:54.552 Test: blockdev writev readv block ...passed 00:13:54.552 Test: blockdev writev readv size > 128k ...passed 00:13:54.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:54.552 Test: blockdev comparev and writev ...[2024-10-29 11:02:59.876974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.552 [2024-10-29 11:02:59.877200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:54.552 [2024-10-29 11:02:59.877230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.552 [2024-10-29 11:02:59.877242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:54.552 [2024-10-29 11:02:59.877727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.552 [2024-10-29 11:02:59.877765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:54.552 [2024-10-29 11:02:59.877781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.552 [2024-10-29 11:02:59.877791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:54.552 [2024-10-29 11:02:59.878182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.553 [2024-10-29 11:02:59.878203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:54.553 [2024-10-29 11:02:59.878220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.553 [2024-10-29 11:02:59.878230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:54.553 [2024-10-29 11:02:59.878615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.553 [2024-10-29 11:02:59.878635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:54.553 [2024-10-29 11:02:59.878651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:54.553 [2024-10-29 11:02:59.878661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
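The completion notices above encode status as an (SCT/SC) hex pair, for example COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09). A tiny lookup helper, not part of the SPDK scripts and covering only the pairs that appear in this run, makes the mapping to the NVMe status names explicit:

# Translate the "(SCT/SC)" pair printed by nvme_qpair.c for the cases seen above.
decode_nvme_status() {
    case "$1/$2" in
        00/01) echo "Generic Command Status / Invalid Command Opcode" ;;
        00/09) echo "Generic Command Status / Command Aborted due to Failed Fused Command" ;;
        02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
        *)     echo "SCT 0x$1 / SC 0x$2 (not mapped in this sketch)" ;;
    esac
}

decode_nvme_status 02 85    # -> Media and Data Integrity Errors / Compare Failure

Each COMPARE FAILURE is paired with the fused WRITE being aborted, and the comparev/writev and passthru tests still report passed, which suggests these notices are the error paths the suite exercises on purpose rather than failures of the run.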
00:13:54.553 passed 00:13:54.553 Test: blockdev nvme passthru rw ...passed 00:13:54.553 Test: blockdev nvme passthru vendor specific ...[2024-10-29 11:02:59.879814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:54.553 [2024-10-29 11:02:59.879854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:54.553 [2024-10-29 11:02:59.880073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOpassed 00:13:54.553 Test: blockdev nvme admin passthru ...CK OFFSET 0x0 len:0x0 00:13:54.553 [2024-10-29 11:02:59.880215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:54.553 [2024-10-29 11:02:59.880396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:54.553 [2024-10-29 11:02:59.880418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:54.553 [2024-10-29 11:02:59.880610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:54.553 [2024-10-29 11:02:59.880631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:54.553 passed 00:13:54.553 Test: blockdev copy ...passed 00:13:54.553 00:13:54.553 Run Summary: Type Total Ran Passed Failed Inactive 00:13:54.553 suites 1 1 n/a 0 0 00:13:54.553 tests 23 23 23 0 0 00:13:54.553 asserts 152 152 152 0 n/a 00:13:54.553 00:13:54.553 Elapsed time = 0.181 seconds 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:54.811 rmmod nvme_tcp 00:13:54.811 rmmod nvme_fabrics 00:13:54.811 rmmod nvme_keyring 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:13:54.811 11:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 83796 ']' 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 83796 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 83796 ']' 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 83796 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:54.811 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83796 00:13:55.068 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:13:55.068 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:13:55.068 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83796' 00:13:55.068 killing process with pid 83796 00:13:55.068 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 83796 00:13:55.068 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 83796 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:55.326 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:55.585 00:13:55.585 real 0m3.466s 00:13:55.585 user 0m10.239s 00:13:55.585 sys 0m1.368s 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:55.585 ************************************ 00:13:55.585 END TEST nvmf_bdevio_no_huge 00:13:55.585 ************************************ 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:55.585 11:03:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:55.585 ************************************ 00:13:55.585 START TEST nvmf_tls 00:13:55.585 ************************************ 00:13:55.585 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:55.585 * Looking for test storage... 
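Teardown, traced just before the TLS suite starts, is the mirror image of the setup and is guaranteed by the EXIT trap that nvmftestinit installed: kill the target, strip only the SPDK-tagged iptables rules, detach and delete the veth/bridge topology, then remove the namespace. The same sequence as a sketch, reusing the names from the earlier setup sketch:

kill "$nvmfpid" && wait "$nvmfpid"

# Restore iptables minus the rules tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if        # deleting one end removes the whole veth pair
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk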
00:13:55.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:55.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.845 --rc genhtml_branch_coverage=1 00:13:55.845 --rc genhtml_function_coverage=1 00:13:55.845 --rc genhtml_legend=1 00:13:55.845 --rc geninfo_all_blocks=1 00:13:55.845 --rc geninfo_unexecuted_blocks=1 00:13:55.845 00:13:55.845 ' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:55.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.845 --rc genhtml_branch_coverage=1 00:13:55.845 --rc genhtml_function_coverage=1 00:13:55.845 --rc genhtml_legend=1 00:13:55.845 --rc geninfo_all_blocks=1 00:13:55.845 --rc geninfo_unexecuted_blocks=1 00:13:55.845 00:13:55.845 ' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:55.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.845 --rc genhtml_branch_coverage=1 00:13:55.845 --rc genhtml_function_coverage=1 00:13:55.845 --rc genhtml_legend=1 00:13:55.845 --rc geninfo_all_blocks=1 00:13:55.845 --rc geninfo_unexecuted_blocks=1 00:13:55.845 00:13:55.845 ' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:55.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:55.845 --rc genhtml_branch_coverage=1 00:13:55.845 --rc genhtml_function_coverage=1 00:13:55.845 --rc genhtml_legend=1 00:13:55.845 --rc geninfo_all_blocks=1 00:13:55.845 --rc geninfo_unexecuted_blocks=1 00:13:55.845 00:13:55.845 ' 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:55.845 11:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:13:55.845 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:55.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:55.846 
11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:55.846 Cannot find device "nvmf_init_br" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:55.846 Cannot find device "nvmf_init_br2" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:55.846 Cannot find device "nvmf_tgt_br" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.846 Cannot find device "nvmf_tgt_br2" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:55.846 Cannot find device "nvmf_init_br" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:55.846 Cannot find device "nvmf_init_br2" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:55.846 Cannot find device "nvmf_tgt_br" 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:55.846 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:55.846 Cannot find device "nvmf_tgt_br2" 00:13:55.847 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:55.847 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:55.847 Cannot find device "nvmf_br" 00:13:55.847 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:55.847 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:56.106 Cannot find device "nvmf_init_if" 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:56.106 Cannot find device "nvmf_init_if2" 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:56.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:56.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:56.106 11:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:56.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:56.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:56.106 00:13:56.106 --- 10.0.0.3 ping statistics --- 00:13:56.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.106 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:56.106 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:56.106 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:13:56.106 00:13:56.106 --- 10.0.0.4 ping statistics --- 00:13:56.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.106 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:56.106 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:56.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:56.365 00:13:56.365 --- 10.0.0.1 ping statistics --- 00:13:56.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.365 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:56.365 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:56.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:56.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:13:56.365 00:13:56.366 --- 10.0.0.2 ping statistics --- 00:13:56.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.366 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84076 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84076 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84076 ']' 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.366 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.366 [2024-10-29 11:03:01.708873] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:13:56.366 [2024-10-29 11:03:01.708991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.625 [2024-10-29 11:03:01.869220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.625 [2024-10-29 11:03:01.891803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.625 [2024-10-29 11:03:01.891862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.625 [2024-10-29 11:03:01.891875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.625 [2024-10-29 11:03:01.891885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.625 [2024-10-29 11:03:01.891893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.625 [2024-10-29 11:03:01.892252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.625 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:56.625 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:56.625 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:56.625 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.625 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.625 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.625 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:56.625 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:56.884 true 00:13:56.884 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:56.884 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:57.143 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:57.143 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:57.143 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:57.403 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:57.403 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:57.971 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:57.971 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:57.971 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:58.231 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:58.231 11:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:58.490 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:58.490 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:58.490 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:58.490 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:58.749 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:58.749 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:58.749 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:59.007 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:59.007 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:59.266 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:59.266 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:59.266 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:59.524 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:59.524 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:59.783 11:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zrHpwTUfu4 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.knsncvYMXJ 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zrHpwTUfu4 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.knsncvYMXJ 00:13:59.783 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:00.041 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:00.300 [2024-10-29 11:03:05.781957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.558 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zrHpwTUfu4 00:14:00.558 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zrHpwTUfu4 00:14:00.558 11:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:00.558 [2024-10-29 11:03:06.054209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.816 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:00.816 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:01.074 [2024-10-29 11:03:06.526301] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:01.074 [2024-10-29 11:03:06.526495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:01.074 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:01.333 malloc0 00:14:01.333 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:01.592 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.zrHpwTUfu4 00:14:01.851 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:02.110 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zrHpwTUfu4 00:14:14.318 Initializing NVMe Controllers 00:14:14.318 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.318 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:14.318 Initialization complete. Launching workers. 00:14:14.318 ======================================================== 00:14:14.318 Latency(us) 00:14:14.318 Device Information : IOPS MiB/s Average min max 00:14:14.318 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10342.58 40.40 6189.16 1470.71 8853.64 00:14:14.318 ======================================================== 00:14:14.318 Total : 10342.58 40.40 6189.16 1470.71 8853.64 00:14:14.318 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zrHpwTUfu4 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zrHpwTUfu4 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84301 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84301 /var/tmp/bdevperf.sock 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84301 ']' 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.318 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.319 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
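The two PSK files created above (/tmp/tmp.zrHpwTUfu4 and /tmp/tmp.knsncvYMXJ) hold keys in the NVMe TLS PSK interchange format emitted by format_interchange_psk: the prefix NVMeTLSkey-1, a two-digit hash identifier (01 in this run), and a Base64 payload, separated by colons. Judging from the python helper traced above, the payload looks like the configured key bytes with a short checksum appended before Base64 encoding; the sketch below only mirrors that shape, and the CRC-32 suffix with little-endian byte order is an assumption, not something the trace itself shows.

format_interchange_psk_sketch() {
  # key: configured PSK as a hex string; digest: hash identifier (1 in this run)
  local key=$1 digest=$2
  python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte checksum suffix
print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PYEOF
}

# e.g. format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
# prints a key of the same shape as the one written to /tmp/tmp.zrHpwTUfu4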
00:14:14.319 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.319 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.319 [2024-10-29 11:03:17.777194] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:14.319 [2024-10-29 11:03:17.777297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84301 ] 00:14:14.319 [2024-10-29 11:03:17.932406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.319 [2024-10-29 11:03:17.956426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.319 [2024-10-29 11:03:17.991322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.319 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.319 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:14.319 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zrHpwTUfu4 00:14:14.319 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:14.319 [2024-10-29 11:03:18.550055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.319 TLSTESTn1 00:14:14.319 11:03:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:14.319 Running I/O for 10 seconds... 
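Condensed from the trace above, the TLS plumbing this bdevperf run depends on amounts to the following RPC sequence; every value is exactly as traced (target on the default /var/tmp/spdk.sock, host side on /var/tmp/bdevperf.sock), with only the surrounding xtrace noise dropped:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: pick the ssl socket implementation, pin TLS 1.3, and publish a
# TLS-enabled (-k) listener whose host entry references the PSK in the keyring
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.zrHpwTUfu4
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# host side: register the same PSK with bdevperf and attach over TLS, then let
# bdevperf.py drive I/O against the resulting TLSTESTn1 bdev
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zrHpwTUfu4
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0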
00:14:15.255 3634.00 IOPS, 14.20 MiB/s [2024-10-29T11:03:22.130Z] 3719.00 IOPS, 14.53 MiB/s [2024-10-29T11:03:23.068Z] 3722.33 IOPS, 14.54 MiB/s [2024-10-29T11:03:24.006Z] 3709.00 IOPS, 14.49 MiB/s [2024-10-29T11:03:24.984Z] 3798.40 IOPS, 14.84 MiB/s [2024-10-29T11:03:25.922Z] 3901.00 IOPS, 15.24 MiB/s [2024-10-29T11:03:26.858Z] 3952.43 IOPS, 15.44 MiB/s [2024-10-29T11:03:27.794Z] 3992.88 IOPS, 15.60 MiB/s [2024-10-29T11:03:29.171Z] 4027.22 IOPS, 15.73 MiB/s [2024-10-29T11:03:29.172Z] 4047.00 IOPS, 15.81 MiB/s 00:14:23.675 Latency(us) 00:14:23.675 [2024-10-29T11:03:29.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.675 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:23.675 Verification LBA range: start 0x0 length 0x2000 00:14:23.675 TLSTESTn1 : 10.02 4053.02 15.83 0.00 0.00 31525.52 5213.09 35270.28 00:14:23.675 [2024-10-29T11:03:29.172Z] =================================================================================================================== 00:14:23.675 [2024-10-29T11:03:29.172Z] Total : 4053.02 15.83 0.00 0.00 31525.52 5213.09 35270.28 00:14:23.675 { 00:14:23.675 "results": [ 00:14:23.675 { 00:14:23.675 "job": "TLSTESTn1", 00:14:23.675 "core_mask": "0x4", 00:14:23.675 "workload": "verify", 00:14:23.675 "status": "finished", 00:14:23.675 "verify_range": { 00:14:23.675 "start": 0, 00:14:23.675 "length": 8192 00:14:23.675 }, 00:14:23.675 "queue_depth": 128, 00:14:23.675 "io_size": 4096, 00:14:23.675 "runtime": 10.016228, 00:14:23.675 "iops": 4053.0227546737156, 00:14:23.675 "mibps": 15.832120135444201, 00:14:23.675 "io_failed": 0, 00:14:23.675 "io_timeout": 0, 00:14:23.675 "avg_latency_us": 31525.522084576176, 00:14:23.675 "min_latency_us": 5213.090909090909, 00:14:23.675 "max_latency_us": 35270.28363636364 00:14:23.675 } 00:14:23.675 ], 00:14:23.675 "core_count": 1 00:14:23.675 } 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84301 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84301 ']' 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84301 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84301 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84301' 00:14:23.675 killing process with pid 84301 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84301 00:14:23.675 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.675 00:14:23.675 Latency(us) 00:14:23.675 [2024-10-29T11:03:29.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.675 [2024-10-29T11:03:29.172Z] 
=================================================================================================================== 00:14:23.675 [2024-10-29T11:03:29.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84301 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.knsncvYMXJ 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.knsncvYMXJ 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.knsncvYMXJ 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.knsncvYMXJ 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84428 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84428 /var/tmp/bdevperf.sock 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84428 ']' 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:23.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
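The 'NOT run_bdevperf ... /tmp/tmp.knsncvYMXJ' step that starts here is a negative test: /tmp/tmp.knsncvYMXJ is the second key, which was never registered for host1 on the target, so the attach is expected to fail. The NOT/valid_exec_arg machinery whose xtrace is interleaved here (autotest_common.sh lines 638-677) reduces to roughly the following reconstruction from the trace; the signal handling around es > 128 and the expected-output check at line 672 are omitted:

NOT() {
    local es=0
    valid_exec_arg "$@"   # line 652; its body (638-642) type-checks that $1 is runnable
    "$@" || es=$?         # line 653: run the command under test, keep its exit status
    # line 677: NOT succeeds only if the command under test failed
    (( !es == 0 ))
}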
00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:23.675 11:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.675 [2024-10-29 11:03:28.996386] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:23.675 [2024-10-29 11:03:28.996514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84428 ] 00:14:23.675 [2024-10-29 11:03:29.139365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.675 [2024-10-29 11:03:29.159334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.934 [2024-10-29 11:03:29.189484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.934 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.934 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:23.934 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.knsncvYMXJ 00:14:24.194 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:24.453 [2024-10-29 11:03:29.877285] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.453 [2024-10-29 11:03:29.883821] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:24.453 [2024-10-29 11:03:29.884660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe210 (107): Transport endpoint is not connected 00:14:24.453 [2024-10-29 11:03:29.885652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe210 (9): Bad file descriptor 00:14:24.453 [2024-10-29 11:03:29.886648] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:24.453 [2024-10-29 11:03:29.886668] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:24.453 [2024-10-29 11:03:29.886677] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:24.453 [2024-10-29 11:03:29.886690] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:24.453 request: 00:14:24.453 { 00:14:24.453 "name": "TLSTEST", 00:14:24.453 "trtype": "tcp", 00:14:24.453 "traddr": "10.0.0.3", 00:14:24.453 "adrfam": "ipv4", 00:14:24.453 "trsvcid": "4420", 00:14:24.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.453 "prchk_reftag": false, 00:14:24.453 "prchk_guard": false, 00:14:24.453 "hdgst": false, 00:14:24.453 "ddgst": false, 00:14:24.453 "psk": "key0", 00:14:24.453 "allow_unrecognized_csi": false, 00:14:24.453 "method": "bdev_nvme_attach_controller", 00:14:24.453 "req_id": 1 00:14:24.453 } 00:14:24.453 Got JSON-RPC error response 00:14:24.453 response: 00:14:24.453 { 00:14:24.453 "code": -5, 00:14:24.453 "message": "Input/output error" 00:14:24.453 } 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84428 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84428 ']' 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84428 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84428 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:24.453 killing process with pid 84428 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84428' 00:14:24.453 Received shutdown signal, test time was about 10.000000 seconds 00:14:24.453 00:14:24.453 Latency(us) 00:14:24.453 [2024-10-29T11:03:29.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.453 [2024-10-29T11:03:29.950Z] =================================================================================================================== 00:14:24.453 [2024-10-29T11:03:29.950Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84428 00:14:24.453 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84428 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zrHpwTUfu4 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zrHpwTUfu4 
00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zrHpwTUfu4 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zrHpwTUfu4 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84449 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84449 /var/tmp/bdevperf.sock 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84449 ']' 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:24.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:24.711 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.711 [2024-10-29 11:03:30.124467] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:14:24.711 [2024-10-29 11:03:30.124569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84449 ] 00:14:24.969 [2024-10-29 11:03:30.276084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.969 [2024-10-29 11:03:30.298626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.969 [2024-10-29 11:03:30.330551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.969 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:24.969 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:24.969 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zrHpwTUfu4 00:14:25.535 11:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:25.794 [2024-10-29 11:03:31.123731] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:25.794 [2024-10-29 11:03:31.133960] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:25.794 [2024-10-29 11:03:31.134011] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:25.794 [2024-10-29 11:03:31.134059] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:25.794 [2024-10-29 11:03:31.134349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca210 (107): Transport endpoint is not connected 00:14:25.794 [2024-10-29 11:03:31.135340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ca210 (9): Bad file descriptor 00:14:25.794 [2024-10-29 11:03:31.136338] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:25.794 [2024-10-29 11:03:31.136377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:25.794 [2024-10-29 11:03:31.136395] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:25.794 [2024-10-29 11:03:31.136409] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
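Unlike the previous failure (wrong key contents, which surfaced as spdk_sock_recv() errno 107), this attempt uses the correct key file but the unregistered host NQN nqn.2016-06.io.spdk:host2, so the target cannot even find a PSK for the identity it derives from the host and subsystem NQNs; that is the 'Could not find PSK for identity: NVMe0R01 ...' pair of errors above. The identity string, as printed by the target (its exact derivation is internal to SPDK's ssl socket code):

printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

# For this host to succeed, the target would need an add_host entry mirroring
# the earlier one for host1 (not what the test wants -- it expects the failure):
#   scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
#       nqn.2016-06.io.spdk:host2 --psk key0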
00:14:25.794 request: 00:14:25.794 { 00:14:25.794 "name": "TLSTEST", 00:14:25.794 "trtype": "tcp", 00:14:25.794 "traddr": "10.0.0.3", 00:14:25.794 "adrfam": "ipv4", 00:14:25.794 "trsvcid": "4420", 00:14:25.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.794 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:25.794 "prchk_reftag": false, 00:14:25.794 "prchk_guard": false, 00:14:25.794 "hdgst": false, 00:14:25.794 "ddgst": false, 00:14:25.794 "psk": "key0", 00:14:25.794 "allow_unrecognized_csi": false, 00:14:25.794 "method": "bdev_nvme_attach_controller", 00:14:25.794 "req_id": 1 00:14:25.794 } 00:14:25.794 Got JSON-RPC error response 00:14:25.794 response: 00:14:25.794 { 00:14:25.794 "code": -5, 00:14:25.794 "message": "Input/output error" 00:14:25.794 } 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84449 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84449 ']' 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84449 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84449 00:14:25.794 killing process with pid 84449 00:14:25.794 Received shutdown signal, test time was about 10.000000 seconds 00:14:25.794 00:14:25.794 Latency(us) 00:14:25.794 [2024-10-29T11:03:31.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.794 [2024-10-29T11:03:31.291Z] =================================================================================================================== 00:14:25.794 [2024-10-29T11:03:31.291Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84449' 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84449 00:14:25.794 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84449 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zrHpwTUfu4 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zrHpwTUfu4 
00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:26.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zrHpwTUfu4 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zrHpwTUfu4 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84476 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84476 /var/tmp/bdevperf.sock 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84476 ']' 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:26.053 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.053 [2024-10-29 11:03:31.365285] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:14:26.053 [2024-10-29 11:03:31.365576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84476 ] 00:14:26.053 [2024-10-29 11:03:31.509364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.053 [2024-10-29 11:03:31.528794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.311 [2024-10-29 11:03:31.557757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.311 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:26.311 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:26.311 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zrHpwTUfu4 00:14:26.570 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:26.829 [2024-10-29 11:03:32.221363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:26.829 [2024-10-29 11:03:32.226796] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:26.829 [2024-10-29 11:03:32.227075] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:26.829 [2024-10-29 11:03:32.227283] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:26.829 [2024-10-29 11:03:32.227495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204b210 (107): Transport endpoint is not connected 00:14:26.829 [2024-10-29 11:03:32.228485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x204b210 (9): Bad file descriptor 00:14:26.829 [2024-10-29 11:03:32.229480] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:26.829 [2024-10-29 11:03:32.229617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:26.829 [2024-10-29 11:03:32.229677] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:26.829 [2024-10-29 11:03:32.229869] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:26.829 request: 00:14:26.829 { 00:14:26.829 "name": "TLSTEST", 00:14:26.829 "trtype": "tcp", 00:14:26.829 "traddr": "10.0.0.3", 00:14:26.829 "adrfam": "ipv4", 00:14:26.829 "trsvcid": "4420", 00:14:26.829 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:26.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.829 "prchk_reftag": false, 00:14:26.829 "prchk_guard": false, 00:14:26.829 "hdgst": false, 00:14:26.829 "ddgst": false, 00:14:26.829 "psk": "key0", 00:14:26.829 "allow_unrecognized_csi": false, 00:14:26.829 "method": "bdev_nvme_attach_controller", 00:14:26.829 "req_id": 1 00:14:26.829 } 00:14:26.829 Got JSON-RPC error response 00:14:26.829 response: 00:14:26.829 { 00:14:26.829 "code": -5, 00:14:26.829 "message": "Input/output error" 00:14:26.829 } 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84476 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84476 ']' 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84476 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84476 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:26.829 killing process with pid 84476 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84476' 00:14:26.829 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.829 00:14:26.829 Latency(us) 00:14:26.829 [2024-10-29T11:03:32.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.829 [2024-10-29T11:03:32.326Z] =================================================================================================================== 00:14:26.829 [2024-10-29T11:03:32.326Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84476 00:14:26.829 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84476 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:27.088 11:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:27.088 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84497 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84497 /var/tmp/bdevperf.sock 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84497 ']' 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:27.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:27.089 11:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.089 [2024-10-29 11:03:32.480396] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
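For reference, the initiator-side pattern this suite keeps exercising against the bdevperf RPC socket is just two rpc.py calls: register the PSK file with the application's keyring, then attach an NVMe/TCP controller that references the key by name. A minimal sketch, with the address, NQNs and key path taken from the trace above (rpc.py is shown with its full repo path in the log); the attach traced above fails on purpose, because the target has no PSK configured for that host/subsystem identity:

    # Register the PSK file under the name "key0" in the bdevperf app's keyring.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zrHpwTUfu4
    # Attach a TLS-protected NVMe/TCP controller that uses the registered key.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0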
00:14:27.089 [2024-10-29 11:03:32.480508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84497 ] 00:14:27.464 [2024-10-29 11:03:32.625145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.464 [2024-10-29 11:03:32.644011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.464 [2024-10-29 11:03:32.672512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.032 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:28.032 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:28.032 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:28.291 [2024-10-29 11:03:33.684027] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:28.291 [2024-10-29 11:03:33.684108] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:28.291 request: 00:14:28.291 { 00:14:28.291 "name": "key0", 00:14:28.291 "path": "", 00:14:28.291 "method": "keyring_file_add_key", 00:14:28.291 "req_id": 1 00:14:28.291 } 00:14:28.291 Got JSON-RPC error response 00:14:28.291 response: 00:14:28.291 { 00:14:28.291 "code": -1, 00:14:28.291 "message": "Operation not permitted" 00:14:28.291 } 00:14:28.291 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:28.550 [2024-10-29 11:03:33.948225] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.550 [2024-10-29 11:03:33.948285] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:28.550 request: 00:14:28.550 { 00:14:28.550 "name": "TLSTEST", 00:14:28.550 "trtype": "tcp", 00:14:28.550 "traddr": "10.0.0.3", 00:14:28.550 "adrfam": "ipv4", 00:14:28.550 "trsvcid": "4420", 00:14:28.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.550 "prchk_reftag": false, 00:14:28.550 "prchk_guard": false, 00:14:28.550 "hdgst": false, 00:14:28.550 "ddgst": false, 00:14:28.550 "psk": "key0", 00:14:28.550 "allow_unrecognized_csi": false, 00:14:28.550 "method": "bdev_nvme_attach_controller", 00:14:28.550 "req_id": 1 00:14:28.550 } 00:14:28.550 Got JSON-RPC error response 00:14:28.550 response: 00:14:28.550 { 00:14:28.550 "code": -126, 00:14:28.550 "message": "Required key not available" 00:14:28.550 } 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84497 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84497 ']' 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84497 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.550 11:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84497 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:28.550 killing process with pid 84497 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84497' 00:14:28.550 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84497 00:14:28.550 Received shutdown signal, test time was about 10.000000 seconds 00:14:28.550 00:14:28.550 Latency(us) 00:14:28.550 [2024-10-29T11:03:34.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.551 [2024-10-29T11:03:34.048Z] =================================================================================================================== 00:14:28.551 [2024-10-29T11:03:34.048Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:28.551 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84497 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84076 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84076 ']' 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84076 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84076 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:28.810 killing process with pid 84076 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84076' 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84076 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84076 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:28.810 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:29.069 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:29.069 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:29.069 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.F3JpIVjEy0 00:14:29.069 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.F3JpIVjEy0 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84541 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84541 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84541 ']' 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:29.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:29.070 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.070 [2024-10-29 11:03:34.394945] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:29.070 [2024-10-29 11:03:34.395048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.070 [2024-10-29 11:03:34.537103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.070 [2024-10-29 11:03:34.556192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.070 [2024-10-29 11:03:34.556251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:29.070 [2024-10-29 11:03:34.556263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.070 [2024-10-29 11:03:34.556271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.070 [2024-10-29 11:03:34.556278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.070 [2024-10-29 11:03:34.556666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.329 [2024-10-29 11:03:34.585589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.F3JpIVjEy0 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F3JpIVjEy0 00:14:29.329 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:29.588 [2024-10-29 11:03:34.945474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.588 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:29.847 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:30.106 [2024-10-29 11:03:35.417624] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.106 [2024-10-29 11:03:35.417846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:30.106 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:30.364 malloc0 00:14:30.364 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:30.623 11:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:14:30.881 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F3JpIVjEy0 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
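The key_long value generated above comes from format_interchange_psk, which wraps the configured key in the NVMe TLS PSK interchange format: an NVMeTLSkey-1:<hash>: prefix (02 here) followed by base64 of the key bytes with a CRC-32 appended, as hinted by the prefix/key/digest/python trace from nvmf/common.sh. A rough sketch of that helper; the CRC byte order ("little" below) and the digest-to-hash mapping are assumptions, not confirmed by the trace:

    format_interchange_psk() {
        # $1 = configured key (used as-is, i.e. the ASCII hex string), $2 = hash id.
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }

    key_long=$(format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2)
    key_long_path=$(mktemp)
    echo -n "$key_long" > "$key_long_path"
    chmod 0600 "$key_long_path"   # keyring_file_add_key rejects group/world-readable key files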
00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F3JpIVjEy0 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84589 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84589 /var/tmp/bdevperf.sock 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84589 ']' 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.140 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.140 [2024-10-29 11:03:36.498890] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
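On the target side, the setup_nvmf_tgt helper traced above reduces to the following rpc.py sequence against the target's default RPC socket (all parameters copied from the trace; the listener's -k flag is what brings TLS into play, per the "TLS support is considered experimental" notice that follows it, and the host entry references the keyring name registered one step earlier):

    rpc.py nvmf_create_transport -t tcp -o                                  # TCP transport
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0                            # backing namespace
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0                    # the 0600 PSK file from above
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0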
00:14:31.140 [2024-10-29 11:03:36.498988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84589 ] 00:14:31.399 [2024-10-29 11:03:36.651187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.399 [2024-10-29 11:03:36.675577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.399 [2024-10-29 11:03:36.709748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.399 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.399 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:31.399 11:03:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:14:31.658 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:31.917 [2024-10-29 11:03:37.239600] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:31.917 TLSTESTn1 00:14:31.918 11:03:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:32.177 Running I/O for 10 seconds... 00:14:34.052 4267.00 IOPS, 16.67 MiB/s [2024-10-29T11:03:40.496Z] 4293.00 IOPS, 16.77 MiB/s [2024-10-29T11:03:41.872Z] 4294.33 IOPS, 16.77 MiB/s [2024-10-29T11:03:42.440Z] 4307.00 IOPS, 16.82 MiB/s [2024-10-29T11:03:43.819Z] 4320.40 IOPS, 16.88 MiB/s [2024-10-29T11:03:44.755Z] 4308.50 IOPS, 16.83 MiB/s [2024-10-29T11:03:45.765Z] 4300.86 IOPS, 16.80 MiB/s [2024-10-29T11:03:46.701Z] 4311.00 IOPS, 16.84 MiB/s [2024-10-29T11:03:47.637Z] 4304.11 IOPS, 16.81 MiB/s [2024-10-29T11:03:47.637Z] 4261.20 IOPS, 16.65 MiB/s 00:14:42.140 Latency(us) 00:14:42.140 [2024-10-29T11:03:47.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.140 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:42.140 Verification LBA range: start 0x0 length 0x2000 00:14:42.140 TLSTESTn1 : 10.02 4266.62 16.67 0.00 0.00 29946.44 5719.51 25141.99 00:14:42.140 [2024-10-29T11:03:47.637Z] =================================================================================================================== 00:14:42.140 [2024-10-29T11:03:47.637Z] Total : 4266.62 16.67 0.00 0.00 29946.44 5719.51 25141.99 00:14:42.140 { 00:14:42.140 "results": [ 00:14:42.140 { 00:14:42.140 "job": "TLSTESTn1", 00:14:42.140 "core_mask": "0x4", 00:14:42.140 "workload": "verify", 00:14:42.140 "status": "finished", 00:14:42.140 "verify_range": { 00:14:42.140 "start": 0, 00:14:42.140 "length": 8192 00:14:42.140 }, 00:14:42.140 "queue_depth": 128, 00:14:42.140 "io_size": 4096, 00:14:42.140 "runtime": 10.01707, 00:14:42.140 "iops": 4266.616884977344, 00:14:42.140 "mibps": 16.66647220694275, 00:14:42.140 "io_failed": 0, 00:14:42.140 "io_timeout": 0, 00:14:42.140 "avg_latency_us": 29946.439397867394, 00:14:42.140 "min_latency_us": 5719.505454545455, 00:14:42.140 
"max_latency_us": 25141.992727272725 00:14:42.140 } 00:14:42.140 ], 00:14:42.140 "core_count": 1 00:14:42.140 } 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84589 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84589 ']' 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84589 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84589 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:42.140 killing process with pid 84589 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84589' 00:14:42.140 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.140 00:14:42.140 Latency(us) 00:14:42.140 [2024-10-29T11:03:47.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.140 [2024-10-29T11:03:47.637Z] =================================================================================================================== 00:14:42.140 [2024-10-29T11:03:47.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84589 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84589 00:14:42.140 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.F3JpIVjEy0 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F3JpIVjEy0 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F3JpIVjEy0 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F3JpIVjEy0 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.F3JpIVjEy0 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84715 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84715 /var/tmp/bdevperf.sock 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84715 ']' 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:42.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:42.400 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.400 [2024-10-29 11:03:47.694662] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:14:42.400 [2024-10-29 11:03:47.694749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84715 ] 00:14:42.400 [2024-10-29 11:03:47.847862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.400 [2024-10-29 11:03:47.872496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.660 [2024-10-29 11:03:47.906275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.660 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:42.660 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:42.660 11:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:14:42.919 [2024-10-29 11:03:48.288177] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.F3JpIVjEy0': 0100666 00:14:42.919 [2024-10-29 11:03:48.288229] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:42.919 request: 00:14:42.919 { 00:14:42.919 "name": "key0", 00:14:42.919 "path": "/tmp/tmp.F3JpIVjEy0", 00:14:42.919 "method": "keyring_file_add_key", 00:14:42.919 "req_id": 1 00:14:42.919 } 00:14:42.919 Got JSON-RPC error response 00:14:42.919 response: 00:14:42.919 { 00:14:42.919 "code": -1, 00:14:42.919 "message": "Operation not permitted" 00:14:42.919 } 00:14:42.919 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.178 [2024-10-29 11:03:48.564343] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.179 [2024-10-29 11:03:48.564466] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:43.179 request: 00:14:43.179 { 00:14:43.179 "name": "TLSTEST", 00:14:43.179 "trtype": "tcp", 00:14:43.179 "traddr": "10.0.0.3", 00:14:43.179 "adrfam": "ipv4", 00:14:43.179 "trsvcid": "4420", 00:14:43.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.179 "prchk_reftag": false, 00:14:43.179 "prchk_guard": false, 00:14:43.179 "hdgst": false, 00:14:43.179 "ddgst": false, 00:14:43.179 "psk": "key0", 00:14:43.179 "allow_unrecognized_csi": false, 00:14:43.179 "method": "bdev_nvme_attach_controller", 00:14:43.179 "req_id": 1 00:14:43.179 } 00:14:43.179 Got JSON-RPC error response 00:14:43.179 response: 00:14:43.179 { 00:14:43.179 "code": -126, 00:14:43.179 "message": "Required key not available" 00:14:43.179 } 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84715 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84715 ']' 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84715 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84715 00:14:43.179 killing process with pid 84715 00:14:43.179 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.179 00:14:43.179 Latency(us) 00:14:43.179 [2024-10-29T11:03:48.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.179 [2024-10-29T11:03:48.676Z] =================================================================================================================== 00:14:43.179 [2024-10-29T11:03:48.676Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84715' 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84715 00:14:43.179 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84715 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 84541 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84541 ']' 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84541 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84541 00:14:43.438 killing process with pid 84541 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84541' 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84541 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84541 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:43.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84743 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84743 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84743 ']' 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.438 11:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.698 [2024-10-29 11:03:48.979084] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:43.698 [2024-10-29 11:03:48.979209] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.698 [2024-10-29 11:03:49.133132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.698 [2024-10-29 11:03:49.157822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.698 [2024-10-29 11:03:49.157903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.698 [2024-10-29 11:03:49.157921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.698 [2024-10-29 11:03:49.157935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.698 [2024-10-29 11:03:49.157948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
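The failure above is purely a file-permission check: once the key file is chmod 0666, keyring_file_add_key rejects it ("Invalid permissions for key file ... 0100666", surfaced as -1 / Operation not permitted), and the subsequent bdev_nvme_attach_controller that names the missing key then fails with -126 / Required key not available. A minimal reproduction sketch, paths taken from the trace:

    chmod 0666 /tmp/tmp.F3JpIVjEy0
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0   # rejected: 0100666
    chmod 0600 /tmp/tmp.F3JpIVjEy0
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0   # accepted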
00:14:43.698 [2024-10-29 11:03:49.158338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.698 [2024-10-29 11:03:49.193509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.F3JpIVjEy0 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.F3JpIVjEy0 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.F3JpIVjEy0 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F3JpIVjEy0 00:14:43.957 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:44.222 [2024-10-29 11:03:49.592791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.222 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:44.485 11:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:45.052 [2024-10-29 11:03:50.256987] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.052 [2024-10-29 11:03:50.257211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.052 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:45.311 malloc0 00:14:45.311 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.570 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:14:45.851 
[2024-10-29 11:03:51.236189] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.F3JpIVjEy0': 0100666 00:14:45.851 [2024-10-29 11:03:51.236235] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:45.851 request: 00:14:45.851 { 00:14:45.851 "name": "key0", 00:14:45.851 "path": "/tmp/tmp.F3JpIVjEy0", 00:14:45.851 "method": "keyring_file_add_key", 00:14:45.851 "req_id": 1 00:14:45.851 } 00:14:45.851 Got JSON-RPC error response 00:14:45.851 response: 00:14:45.851 { 00:14:45.851 "code": -1, 00:14:45.851 "message": "Operation not permitted" 00:14:45.851 } 00:14:45.851 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.110 [2024-10-29 11:03:51.500281] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:46.110 [2024-10-29 11:03:51.500412] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:46.110 request: 00:14:46.110 { 00:14:46.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.110 "host": "nqn.2016-06.io.spdk:host1", 00:14:46.110 "psk": "key0", 00:14:46.110 "method": "nvmf_subsystem_add_host", 00:14:46.110 "req_id": 1 00:14:46.110 } 00:14:46.110 Got JSON-RPC error response 00:14:46.110 response: 00:14:46.110 { 00:14:46.110 "code": -32603, 00:14:46.110 "message": "Internal error" 00:14:46.110 } 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84743 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84743 ']' 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84743 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84743 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:46.110 killing process with pid 84743 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84743' 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84743 00:14:46.110 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84743 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.F3JpIVjEy0 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84799 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84799 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84799 ']' 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.368 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.368 [2024-10-29 11:03:51.763011] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:46.369 [2024-10-29 11:03:51.763107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.627 [2024-10-29 11:03:51.904337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.627 [2024-10-29 11:03:51.922690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.627 [2024-10-29 11:03:51.922758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.627 [2024-10-29 11:03:51.922769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.627 [2024-10-29 11:03:51.922776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.627 [2024-10-29 11:03:51.922782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
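The same ordering constraint shows up on the target above: nvmf_subsystem_add_host --psk refers to a keyring entry by name, so when keyring_file_add_key was rejected (here because of the 0666 permissions), the add_host call fails with "Key 'key0' does not exist" and -32603 Internal error. A sketch of the failing versus working order, commands as in the trace:

    # Fails while key0 is not in the keyring:
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # Register the key first (file must be 0600), then add the host:
    rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0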
00:14:46.627 [2024-10-29 11:03:51.923061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.627 [2024-10-29 11:03:51.951479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.627 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.627 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:46.627 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.627 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.627 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.627 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.627 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.F3JpIVjEy0 00:14:46.627 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F3JpIVjEy0 00:14:46.627 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:46.886 [2024-10-29 11:03:52.323645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.886 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:47.145 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:47.404 [2024-10-29 11:03:52.859768] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.404 [2024-10-29 11:03:52.860004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:47.404 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:47.972 malloc0 00:14:47.972 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:47.972 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:14:48.539 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84857 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84857 /var/tmp/bdevperf.sock 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84857 ']' 
00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:48.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:48.539 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.797 [2024-10-29 11:03:54.080988] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:48.797 [2024-10-29 11:03:54.081083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84857 ] 00:14:48.797 [2024-10-29 11:03:54.231017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.797 [2024-10-29 11:03:54.252208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.797 [2024-10-29 11:03:54.282168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.056 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:49.056 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:49.056 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:14:49.314 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:49.571 [2024-10-29 11:03:54.970069] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.571 TLSTESTn1 00:14:49.571 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:50.137 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:50.137 "subsystems": [ 00:14:50.137 { 00:14:50.137 "subsystem": "keyring", 00:14:50.137 "config": [ 00:14:50.137 { 00:14:50.137 "method": "keyring_file_add_key", 00:14:50.137 "params": { 00:14:50.137 "name": "key0", 00:14:50.137 "path": "/tmp/tmp.F3JpIVjEy0" 00:14:50.137 } 00:14:50.137 } 00:14:50.137 ] 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "subsystem": "iobuf", 00:14:50.137 "config": [ 00:14:50.137 { 00:14:50.137 "method": "iobuf_set_options", 00:14:50.137 "params": { 00:14:50.137 "small_pool_count": 8192, 00:14:50.137 "large_pool_count": 1024, 00:14:50.137 "small_bufsize": 8192, 00:14:50.137 "large_bufsize": 135168, 00:14:50.137 "enable_numa": false 00:14:50.137 } 00:14:50.137 } 00:14:50.137 ] 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "subsystem": "sock", 00:14:50.137 "config": [ 00:14:50.137 { 00:14:50.137 "method": "sock_set_default_impl", 00:14:50.137 "params": { 
00:14:50.137 "impl_name": "uring" 00:14:50.137 } 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "method": "sock_impl_set_options", 00:14:50.137 "params": { 00:14:50.137 "impl_name": "ssl", 00:14:50.137 "recv_buf_size": 4096, 00:14:50.137 "send_buf_size": 4096, 00:14:50.137 "enable_recv_pipe": true, 00:14:50.137 "enable_quickack": false, 00:14:50.137 "enable_placement_id": 0, 00:14:50.137 "enable_zerocopy_send_server": true, 00:14:50.137 "enable_zerocopy_send_client": false, 00:14:50.137 "zerocopy_threshold": 0, 00:14:50.137 "tls_version": 0, 00:14:50.137 "enable_ktls": false 00:14:50.137 } 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "method": "sock_impl_set_options", 00:14:50.137 "params": { 00:14:50.137 "impl_name": "posix", 00:14:50.137 "recv_buf_size": 2097152, 00:14:50.137 "send_buf_size": 2097152, 00:14:50.137 "enable_recv_pipe": true, 00:14:50.137 "enable_quickack": false, 00:14:50.137 "enable_placement_id": 0, 00:14:50.137 "enable_zerocopy_send_server": true, 00:14:50.137 "enable_zerocopy_send_client": false, 00:14:50.137 "zerocopy_threshold": 0, 00:14:50.137 "tls_version": 0, 00:14:50.137 "enable_ktls": false 00:14:50.137 } 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "method": "sock_impl_set_options", 00:14:50.137 "params": { 00:14:50.137 "impl_name": "uring", 00:14:50.137 "recv_buf_size": 2097152, 00:14:50.137 "send_buf_size": 2097152, 00:14:50.137 "enable_recv_pipe": true, 00:14:50.137 "enable_quickack": false, 00:14:50.137 "enable_placement_id": 0, 00:14:50.137 "enable_zerocopy_send_server": false, 00:14:50.137 "enable_zerocopy_send_client": false, 00:14:50.137 "zerocopy_threshold": 0, 00:14:50.137 "tls_version": 0, 00:14:50.137 "enable_ktls": false 00:14:50.137 } 00:14:50.137 } 00:14:50.137 ] 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "subsystem": "vmd", 00:14:50.137 "config": [] 00:14:50.137 }, 00:14:50.137 { 00:14:50.137 "subsystem": "accel", 00:14:50.137 "config": [ 00:14:50.137 { 00:14:50.137 "method": "accel_set_options", 00:14:50.137 "params": { 00:14:50.137 "small_cache_size": 128, 00:14:50.138 "large_cache_size": 16, 00:14:50.138 "task_count": 2048, 00:14:50.138 "sequence_count": 2048, 00:14:50.138 "buf_count": 2048 00:14:50.138 } 00:14:50.138 } 00:14:50.138 ] 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "subsystem": "bdev", 00:14:50.138 "config": [ 00:14:50.138 { 00:14:50.138 "method": "bdev_set_options", 00:14:50.138 "params": { 00:14:50.138 "bdev_io_pool_size": 65535, 00:14:50.138 "bdev_io_cache_size": 256, 00:14:50.138 "bdev_auto_examine": true, 00:14:50.138 "iobuf_small_cache_size": 128, 00:14:50.138 "iobuf_large_cache_size": 16 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "bdev_raid_set_options", 00:14:50.138 "params": { 00:14:50.138 "process_window_size_kb": 1024, 00:14:50.138 "process_max_bandwidth_mb_sec": 0 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "bdev_iscsi_set_options", 00:14:50.138 "params": { 00:14:50.138 "timeout_sec": 30 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "bdev_nvme_set_options", 00:14:50.138 "params": { 00:14:50.138 "action_on_timeout": "none", 00:14:50.138 "timeout_us": 0, 00:14:50.138 "timeout_admin_us": 0, 00:14:50.138 "keep_alive_timeout_ms": 10000, 00:14:50.138 "arbitration_burst": 0, 00:14:50.138 "low_priority_weight": 0, 00:14:50.138 "medium_priority_weight": 0, 00:14:50.138 "high_priority_weight": 0, 00:14:50.138 "nvme_adminq_poll_period_us": 10000, 00:14:50.138 "nvme_ioq_poll_period_us": 0, 00:14:50.138 "io_queue_requests": 0, 00:14:50.138 "delay_cmd_submit": 
true, 00:14:50.138 "transport_retry_count": 4, 00:14:50.138 "bdev_retry_count": 3, 00:14:50.138 "transport_ack_timeout": 0, 00:14:50.138 "ctrlr_loss_timeout_sec": 0, 00:14:50.138 "reconnect_delay_sec": 0, 00:14:50.138 "fast_io_fail_timeout_sec": 0, 00:14:50.138 "disable_auto_failback": false, 00:14:50.138 "generate_uuids": false, 00:14:50.138 "transport_tos": 0, 00:14:50.138 "nvme_error_stat": false, 00:14:50.138 "rdma_srq_size": 0, 00:14:50.138 "io_path_stat": false, 00:14:50.138 "allow_accel_sequence": false, 00:14:50.138 "rdma_max_cq_size": 0, 00:14:50.138 "rdma_cm_event_timeout_ms": 0, 00:14:50.138 "dhchap_digests": [ 00:14:50.138 "sha256", 00:14:50.138 "sha384", 00:14:50.138 "sha512" 00:14:50.138 ], 00:14:50.138 "dhchap_dhgroups": [ 00:14:50.138 "null", 00:14:50.138 "ffdhe2048", 00:14:50.138 "ffdhe3072", 00:14:50.138 "ffdhe4096", 00:14:50.138 "ffdhe6144", 00:14:50.138 "ffdhe8192" 00:14:50.138 ] 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "bdev_nvme_set_hotplug", 00:14:50.138 "params": { 00:14:50.138 "period_us": 100000, 00:14:50.138 "enable": false 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "bdev_malloc_create", 00:14:50.138 "params": { 00:14:50.138 "name": "malloc0", 00:14:50.138 "num_blocks": 8192, 00:14:50.138 "block_size": 4096, 00:14:50.138 "physical_block_size": 4096, 00:14:50.138 "uuid": "daad3012-b979-4423-8198-598f13675edf", 00:14:50.138 "optimal_io_boundary": 0, 00:14:50.138 "md_size": 0, 00:14:50.138 "dif_type": 0, 00:14:50.138 "dif_is_head_of_md": false, 00:14:50.138 "dif_pi_format": 0 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "bdev_wait_for_examine" 00:14:50.138 } 00:14:50.138 ] 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "subsystem": "nbd", 00:14:50.138 "config": [] 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "subsystem": "scheduler", 00:14:50.138 "config": [ 00:14:50.138 { 00:14:50.138 "method": "framework_set_scheduler", 00:14:50.138 "params": { 00:14:50.138 "name": "static" 00:14:50.138 } 00:14:50.138 } 00:14:50.138 ] 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "subsystem": "nvmf", 00:14:50.138 "config": [ 00:14:50.138 { 00:14:50.138 "method": "nvmf_set_config", 00:14:50.138 "params": { 00:14:50.138 "discovery_filter": "match_any", 00:14:50.138 "admin_cmd_passthru": { 00:14:50.138 "identify_ctrlr": false 00:14:50.138 }, 00:14:50.138 "dhchap_digests": [ 00:14:50.138 "sha256", 00:14:50.138 "sha384", 00:14:50.138 "sha512" 00:14:50.138 ], 00:14:50.138 "dhchap_dhgroups": [ 00:14:50.138 "null", 00:14:50.138 "ffdhe2048", 00:14:50.138 "ffdhe3072", 00:14:50.138 "ffdhe4096", 00:14:50.138 "ffdhe6144", 00:14:50.138 "ffdhe8192" 00:14:50.138 ] 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_set_max_subsystems", 00:14:50.138 "params": { 00:14:50.138 "max_subsystems": 1024 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_set_crdt", 00:14:50.138 "params": { 00:14:50.138 "crdt1": 0, 00:14:50.138 "crdt2": 0, 00:14:50.138 "crdt3": 0 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_create_transport", 00:14:50.138 "params": { 00:14:50.138 "trtype": "TCP", 00:14:50.138 "max_queue_depth": 128, 00:14:50.138 "max_io_qpairs_per_ctrlr": 127, 00:14:50.138 "in_capsule_data_size": 4096, 00:14:50.138 "max_io_size": 131072, 00:14:50.138 "io_unit_size": 131072, 00:14:50.138 "max_aq_depth": 128, 00:14:50.138 "num_shared_buffers": 511, 00:14:50.138 "buf_cache_size": 4294967295, 00:14:50.138 "dif_insert_or_strip": false, 00:14:50.138 "zcopy": false, 
00:14:50.138 "c2h_success": false, 00:14:50.138 "sock_priority": 0, 00:14:50.138 "abort_timeout_sec": 1, 00:14:50.138 "ack_timeout": 0, 00:14:50.138 "data_wr_pool_size": 0 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_create_subsystem", 00:14:50.138 "params": { 00:14:50.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.138 "allow_any_host": false, 00:14:50.138 "serial_number": "SPDK00000000000001", 00:14:50.138 "model_number": "SPDK bdev Controller", 00:14:50.138 "max_namespaces": 10, 00:14:50.138 "min_cntlid": 1, 00:14:50.138 "max_cntlid": 65519, 00:14:50.138 "ana_reporting": false 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_subsystem_add_host", 00:14:50.138 "params": { 00:14:50.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.138 "host": "nqn.2016-06.io.spdk:host1", 00:14:50.138 "psk": "key0" 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_subsystem_add_ns", 00:14:50.138 "params": { 00:14:50.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.138 "namespace": { 00:14:50.138 "nsid": 1, 00:14:50.138 "bdev_name": "malloc0", 00:14:50.138 "nguid": "DAAD3012B97944238198598F13675EDF", 00:14:50.138 "uuid": "daad3012-b979-4423-8198-598f13675edf", 00:14:50.138 "no_auto_visible": false 00:14:50.138 } 00:14:50.138 } 00:14:50.138 }, 00:14:50.138 { 00:14:50.138 "method": "nvmf_subsystem_add_listener", 00:14:50.138 "params": { 00:14:50.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.138 "listen_address": { 00:14:50.138 "trtype": "TCP", 00:14:50.138 "adrfam": "IPv4", 00:14:50.138 "traddr": "10.0.0.3", 00:14:50.138 "trsvcid": "4420" 00:14:50.138 }, 00:14:50.138 "secure_channel": true 00:14:50.138 } 00:14:50.138 } 00:14:50.138 ] 00:14:50.138 } 00:14:50.138 ] 00:14:50.138 }' 00:14:50.138 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:50.398 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:50.398 "subsystems": [ 00:14:50.398 { 00:14:50.398 "subsystem": "keyring", 00:14:50.398 "config": [ 00:14:50.398 { 00:14:50.398 "method": "keyring_file_add_key", 00:14:50.398 "params": { 00:14:50.398 "name": "key0", 00:14:50.398 "path": "/tmp/tmp.F3JpIVjEy0" 00:14:50.398 } 00:14:50.398 } 00:14:50.398 ] 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "subsystem": "iobuf", 00:14:50.398 "config": [ 00:14:50.398 { 00:14:50.398 "method": "iobuf_set_options", 00:14:50.398 "params": { 00:14:50.398 "small_pool_count": 8192, 00:14:50.398 "large_pool_count": 1024, 00:14:50.398 "small_bufsize": 8192, 00:14:50.398 "large_bufsize": 135168, 00:14:50.398 "enable_numa": false 00:14:50.398 } 00:14:50.398 } 00:14:50.398 ] 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "subsystem": "sock", 00:14:50.398 "config": [ 00:14:50.398 { 00:14:50.398 "method": "sock_set_default_impl", 00:14:50.398 "params": { 00:14:50.398 "impl_name": "uring" 00:14:50.398 } 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "method": "sock_impl_set_options", 00:14:50.398 "params": { 00:14:50.398 "impl_name": "ssl", 00:14:50.398 "recv_buf_size": 4096, 00:14:50.398 "send_buf_size": 4096, 00:14:50.398 "enable_recv_pipe": true, 00:14:50.398 "enable_quickack": false, 00:14:50.398 "enable_placement_id": 0, 00:14:50.398 "enable_zerocopy_send_server": true, 00:14:50.398 "enable_zerocopy_send_client": false, 00:14:50.398 "zerocopy_threshold": 0, 00:14:50.398 "tls_version": 0, 00:14:50.398 "enable_ktls": false 00:14:50.398 } 00:14:50.398 }, 
00:14:50.398 { 00:14:50.398 "method": "sock_impl_set_options", 00:14:50.398 "params": { 00:14:50.398 "impl_name": "posix", 00:14:50.398 "recv_buf_size": 2097152, 00:14:50.398 "send_buf_size": 2097152, 00:14:50.398 "enable_recv_pipe": true, 00:14:50.398 "enable_quickack": false, 00:14:50.398 "enable_placement_id": 0, 00:14:50.398 "enable_zerocopy_send_server": true, 00:14:50.398 "enable_zerocopy_send_client": false, 00:14:50.398 "zerocopy_threshold": 0, 00:14:50.398 "tls_version": 0, 00:14:50.398 "enable_ktls": false 00:14:50.398 } 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "method": "sock_impl_set_options", 00:14:50.398 "params": { 00:14:50.398 "impl_name": "uring", 00:14:50.398 "recv_buf_size": 2097152, 00:14:50.398 "send_buf_size": 2097152, 00:14:50.398 "enable_recv_pipe": true, 00:14:50.398 "enable_quickack": false, 00:14:50.398 "enable_placement_id": 0, 00:14:50.398 "enable_zerocopy_send_server": false, 00:14:50.398 "enable_zerocopy_send_client": false, 00:14:50.398 "zerocopy_threshold": 0, 00:14:50.398 "tls_version": 0, 00:14:50.398 "enable_ktls": false 00:14:50.398 } 00:14:50.398 } 00:14:50.398 ] 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "subsystem": "vmd", 00:14:50.398 "config": [] 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "subsystem": "accel", 00:14:50.398 "config": [ 00:14:50.398 { 00:14:50.398 "method": "accel_set_options", 00:14:50.398 "params": { 00:14:50.398 "small_cache_size": 128, 00:14:50.398 "large_cache_size": 16, 00:14:50.398 "task_count": 2048, 00:14:50.398 "sequence_count": 2048, 00:14:50.398 "buf_count": 2048 00:14:50.398 } 00:14:50.398 } 00:14:50.398 ] 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "subsystem": "bdev", 00:14:50.398 "config": [ 00:14:50.398 { 00:14:50.398 "method": "bdev_set_options", 00:14:50.398 "params": { 00:14:50.398 "bdev_io_pool_size": 65535, 00:14:50.398 "bdev_io_cache_size": 256, 00:14:50.398 "bdev_auto_examine": true, 00:14:50.398 "iobuf_small_cache_size": 128, 00:14:50.398 "iobuf_large_cache_size": 16 00:14:50.398 } 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "method": "bdev_raid_set_options", 00:14:50.398 "params": { 00:14:50.398 "process_window_size_kb": 1024, 00:14:50.398 "process_max_bandwidth_mb_sec": 0 00:14:50.398 } 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "method": "bdev_iscsi_set_options", 00:14:50.398 "params": { 00:14:50.398 "timeout_sec": 30 00:14:50.398 } 00:14:50.398 }, 00:14:50.398 { 00:14:50.398 "method": "bdev_nvme_set_options", 00:14:50.398 "params": { 00:14:50.398 "action_on_timeout": "none", 00:14:50.398 "timeout_us": 0, 00:14:50.398 "timeout_admin_us": 0, 00:14:50.398 "keep_alive_timeout_ms": 10000, 00:14:50.398 "arbitration_burst": 0, 00:14:50.398 "low_priority_weight": 0, 00:14:50.398 "medium_priority_weight": 0, 00:14:50.398 "high_priority_weight": 0, 00:14:50.398 "nvme_adminq_poll_period_us": 10000, 00:14:50.398 "nvme_ioq_poll_period_us": 0, 00:14:50.398 "io_queue_requests": 512, 00:14:50.398 "delay_cmd_submit": true, 00:14:50.398 "transport_retry_count": 4, 00:14:50.398 "bdev_retry_count": 3, 00:14:50.398 "transport_ack_timeout": 0, 00:14:50.398 "ctrlr_loss_timeout_sec": 0, 00:14:50.398 "reconnect_delay_sec": 0, 00:14:50.398 "fast_io_fail_timeout_sec": 0, 00:14:50.398 "disable_auto_failback": false, 00:14:50.398 "generate_uuids": false, 00:14:50.398 "transport_tos": 0, 00:14:50.398 "nvme_error_stat": false, 00:14:50.398 "rdma_srq_size": 0, 00:14:50.398 "io_path_stat": false, 00:14:50.398 "allow_accel_sequence": false, 00:14:50.398 "rdma_max_cq_size": 0, 00:14:50.398 "rdma_cm_event_timeout_ms": 0, 00:14:50.398 
"dhchap_digests": [ 00:14:50.398 "sha256", 00:14:50.398 "sha384", 00:14:50.398 "sha512" 00:14:50.399 ], 00:14:50.399 "dhchap_dhgroups": [ 00:14:50.399 "null", 00:14:50.399 "ffdhe2048", 00:14:50.399 "ffdhe3072", 00:14:50.399 "ffdhe4096", 00:14:50.399 "ffdhe6144", 00:14:50.399 "ffdhe8192" 00:14:50.399 ] 00:14:50.399 } 00:14:50.399 }, 00:14:50.399 { 00:14:50.399 "method": "bdev_nvme_attach_controller", 00:14:50.399 "params": { 00:14:50.399 "name": "TLSTEST", 00:14:50.399 "trtype": "TCP", 00:14:50.399 "adrfam": "IPv4", 00:14:50.399 "traddr": "10.0.0.3", 00:14:50.399 "trsvcid": "4420", 00:14:50.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.399 "prchk_reftag": false, 00:14:50.399 "prchk_guard": false, 00:14:50.399 "ctrlr_loss_timeout_sec": 0, 00:14:50.399 "reconnect_delay_sec": 0, 00:14:50.399 "fast_io_fail_timeout_sec": 0, 00:14:50.399 "psk": "key0", 00:14:50.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.399 "hdgst": false, 00:14:50.399 "ddgst": false, 00:14:50.399 "multipath": "multipath" 00:14:50.399 } 00:14:50.399 }, 00:14:50.399 { 00:14:50.399 "method": "bdev_nvme_set_hotplug", 00:14:50.399 "params": { 00:14:50.399 "period_us": 100000, 00:14:50.399 "enable": false 00:14:50.399 } 00:14:50.399 }, 00:14:50.399 { 00:14:50.399 "method": "bdev_wait_for_examine" 00:14:50.399 } 00:14:50.399 ] 00:14:50.399 }, 00:14:50.399 { 00:14:50.399 "subsystem": "nbd", 00:14:50.399 "config": [] 00:14:50.399 } 00:14:50.399 ] 00:14:50.399 }' 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84857 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84857 ']' 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84857 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84857 00:14:50.399 killing process with pid 84857 00:14:50.399 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.399 00:14:50.399 Latency(us) 00:14:50.399 [2024-10-29T11:03:55.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.399 [2024-10-29T11:03:55.896Z] =================================================================================================================== 00:14:50.399 [2024-10-29T11:03:55.896Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84857' 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84857 00:14:50.399 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84857 00:14:50.659 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84799 00:14:50.659 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84799 ']' 00:14:50.659 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 84799 00:14:50.659 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:50.659 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.659 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84799 00:14:50.659 killing process with pid 84799 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84799' 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84799 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84799 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:50.659 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:50.659 "subsystems": [ 00:14:50.659 { 00:14:50.659 "subsystem": "keyring", 00:14:50.659 "config": [ 00:14:50.659 { 00:14:50.659 "method": "keyring_file_add_key", 00:14:50.659 "params": { 00:14:50.659 "name": "key0", 00:14:50.659 "path": "/tmp/tmp.F3JpIVjEy0" 00:14:50.659 } 00:14:50.659 } 00:14:50.659 ] 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "subsystem": "iobuf", 00:14:50.659 "config": [ 00:14:50.659 { 00:14:50.659 "method": "iobuf_set_options", 00:14:50.659 "params": { 00:14:50.659 "small_pool_count": 8192, 00:14:50.659 "large_pool_count": 1024, 00:14:50.659 "small_bufsize": 8192, 00:14:50.659 "large_bufsize": 135168, 00:14:50.659 "enable_numa": false 00:14:50.659 } 00:14:50.659 } 00:14:50.659 ] 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "subsystem": "sock", 00:14:50.659 "config": [ 00:14:50.659 { 00:14:50.659 "method": "sock_set_default_impl", 00:14:50.659 "params": { 00:14:50.659 "impl_name": "uring" 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "sock_impl_set_options", 00:14:50.659 "params": { 00:14:50.659 "impl_name": "ssl", 00:14:50.659 "recv_buf_size": 4096, 00:14:50.659 "send_buf_size": 4096, 00:14:50.659 "enable_recv_pipe": true, 00:14:50.659 "enable_quickack": false, 00:14:50.659 "enable_placement_id": 0, 00:14:50.659 "enable_zerocopy_send_server": true, 00:14:50.659 "enable_zerocopy_send_client": false, 00:14:50.659 "zerocopy_threshold": 0, 00:14:50.659 "tls_version": 0, 00:14:50.659 "enable_ktls": false 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "sock_impl_set_options", 00:14:50.659 "params": { 00:14:50.659 "impl_name": "posix", 00:14:50.659 "recv_buf_size": 2097152, 00:14:50.659 "send_buf_size": 2097152, 00:14:50.659 "enable_recv_pipe": true, 00:14:50.659 "enable_quickack": false, 00:14:50.659 "enable_placement_id": 0, 00:14:50.659 "enable_zerocopy_send_server": true, 00:14:50.659 "enable_zerocopy_send_client": false, 00:14:50.659 "zerocopy_threshold": 0, 00:14:50.659 "tls_version": 0, 00:14:50.659 "enable_ktls": false 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "sock_impl_set_options", 
00:14:50.659 "params": { 00:14:50.659 "impl_name": "uring", 00:14:50.659 "recv_buf_size": 2097152, 00:14:50.659 "send_buf_size": 2097152, 00:14:50.659 "enable_recv_pipe": true, 00:14:50.659 "enable_quickack": false, 00:14:50.659 "enable_placement_id": 0, 00:14:50.659 "enable_zerocopy_send_server": false, 00:14:50.659 "enable_zerocopy_send_client": false, 00:14:50.659 "zerocopy_threshold": 0, 00:14:50.659 "tls_version": 0, 00:14:50.659 "enable_ktls": false 00:14:50.659 } 00:14:50.659 } 00:14:50.659 ] 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "subsystem": "vmd", 00:14:50.659 "config": [] 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "subsystem": "accel", 00:14:50.659 "config": [ 00:14:50.659 { 00:14:50.659 "method": "accel_set_options", 00:14:50.659 "params": { 00:14:50.659 "small_cache_size": 128, 00:14:50.659 "large_cache_size": 16, 00:14:50.659 "task_count": 2048, 00:14:50.659 "sequence_count": 2048, 00:14:50.659 "buf_count": 2048 00:14:50.659 } 00:14:50.659 } 00:14:50.659 ] 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "subsystem": "bdev", 00:14:50.659 "config": [ 00:14:50.659 { 00:14:50.659 "method": "bdev_set_options", 00:14:50.659 "params": { 00:14:50.659 "bdev_io_pool_size": 65535, 00:14:50.659 "bdev_io_cache_size": 256, 00:14:50.659 "bdev_auto_examine": true, 00:14:50.659 "iobuf_small_cache_size": 128, 00:14:50.659 "iobuf_large_cache_size": 16 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "bdev_raid_set_options", 00:14:50.659 "params": { 00:14:50.659 "process_window_size_kb": 1024, 00:14:50.659 "process_max_bandwidth_mb_sec": 0 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "bdev_iscsi_set_options", 00:14:50.659 "params": { 00:14:50.659 "timeout_sec": 30 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "bdev_nvme_set_options", 00:14:50.659 "params": { 00:14:50.659 "action_on_timeout": "none", 00:14:50.659 "timeout_us": 0, 00:14:50.659 "timeout_admin_us": 0, 00:14:50.659 "keep_alive_timeout_ms": 10000, 00:14:50.659 "arbitration_burst": 0, 00:14:50.659 "low_priority_weight": 0, 00:14:50.659 "medium_priority_weight": 0, 00:14:50.659 "high_priority_weight": 0, 00:14:50.659 "nvme_adminq_poll_period_us": 10000, 00:14:50.659 "nvme_ioq_poll_period_us": 0, 00:14:50.659 "io_queue_requests": 0, 00:14:50.659 "delay_cmd_submit": true, 00:14:50.659 "transport_retry_count": 4, 00:14:50.659 "bdev_retry_count": 3, 00:14:50.659 "transport_ack_timeout": 0, 00:14:50.659 "ctrlr_loss_timeout_sec": 0, 00:14:50.659 "reconnect_delay_sec": 0, 00:14:50.659 "fast_io_fail_timeout_sec": 0, 00:14:50.659 "disable_auto_failback": false, 00:14:50.659 "generate_uuids": false, 00:14:50.659 "transport_tos": 0, 00:14:50.659 "nvme_error_stat": false, 00:14:50.659 "rdma_srq_size": 0, 00:14:50.659 "io_path_stat": false, 00:14:50.659 "allow_accel_sequence": false, 00:14:50.659 "rdma_max_cq_size": 0, 00:14:50.659 "rdma_cm_event_timeout_ms": 0, 00:14:50.659 "dhchap_digests": [ 00:14:50.659 "sha256", 00:14:50.659 "sha384", 00:14:50.659 "sha512" 00:14:50.659 ], 00:14:50.659 "dhchap_dhgroups": [ 00:14:50.659 "null", 00:14:50.659 "ffdhe2048", 00:14:50.659 "ffdhe3072", 00:14:50.659 "ffdhe4096", 00:14:50.659 "ffdhe6144", 00:14:50.659 "ffdhe8192" 00:14:50.659 ] 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "bdev_nvme_set_hotplug", 00:14:50.659 "params": { 00:14:50.659 "period_us": 100000, 00:14:50.659 "enable": false 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "bdev_malloc_create", 00:14:50.659 "params": { 00:14:50.659 
"name": "malloc0", 00:14:50.659 "num_blocks": 8192, 00:14:50.659 "block_size": 4096, 00:14:50.659 "physical_block_size": 4096, 00:14:50.659 "uuid": "daad3012-b979-4423-8198-598f13675edf", 00:14:50.659 "optimal_io_boundary": 0, 00:14:50.659 "md_size": 0, 00:14:50.659 "dif_type": 0, 00:14:50.659 "dif_is_head_of_md": false, 00:14:50.659 "dif_pi_format": 0 00:14:50.659 } 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "method": "bdev_wait_for_examine" 00:14:50.659 } 00:14:50.659 ] 00:14:50.659 }, 00:14:50.659 { 00:14:50.659 "subsystem": "nbd", 00:14:50.659 "config": [] 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "subsystem": "scheduler", 00:14:50.660 "config": [ 00:14:50.660 { 00:14:50.660 "method": "framework_set_scheduler", 00:14:50.660 "params": { 00:14:50.660 "name": "static" 00:14:50.660 } 00:14:50.660 } 00:14:50.660 ] 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "subsystem": "nvmf", 00:14:50.660 "config": [ 00:14:50.660 { 00:14:50.660 "method": "nvmf_set_config", 00:14:50.660 "params": { 00:14:50.660 "discovery_filter": "match_any", 00:14:50.660 "admin_cmd_passthru": { 00:14:50.660 "identify_ctrlr": false 00:14:50.660 }, 00:14:50.660 "dhchap_digests": [ 00:14:50.660 "sha256", 00:14:50.660 "sha384", 00:14:50.660 "sha512" 00:14:50.660 ], 00:14:50.660 "dhchap_dhgroups": [ 00:14:50.660 "null", 00:14:50.660 "ffdhe2048", 00:14:50.660 "ffdhe3072", 00:14:50.660 "ffdhe4096", 00:14:50.660 "ffdhe6144", 00:14:50.660 "ffdhe8192" 00:14:50.660 ] 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_set_max_subsystems", 00:14:50.660 "params": { 00:14:50.660 "max_subsystems": 1024 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_set_crdt", 00:14:50.660 "params": { 00:14:50.660 "crdt1": 0, 00:14:50.660 "crdt2": 0, 00:14:50.660 "crdt3": 0 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_create_transport", 00:14:50.660 "params": { 00:14:50.660 "trtype": "TCP", 00:14:50.660 "max_queue_depth": 128, 00:14:50.660 "max_io_qpairs_per_ctrlr": 127, 00:14:50.660 "in_capsule_data_size": 4096, 00:14:50.660 "max_io_size": 131072, 00:14:50.660 "io_unit_size": 131072, 00:14:50.660 "max_aq_depth": 128, 00:14:50.660 "num_shared_buffers": 511, 00:14:50.660 "buf_cache_size": 4294967295, 00:14:50.660 "dif_insert_or_strip": false, 00:14:50.660 "zcopy": false, 00:14:50.660 "c2h_success": false, 00:14:50.660 "sock_priority": 0, 00:14:50.660 "abort_timeout_sec": 1, 00:14:50.660 "ack_timeout": 0, 00:14:50.660 "data_wr_pool_size": 0 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_create_subsystem", 00:14:50.660 "params": { 00:14:50.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.660 "allow_any_host": false, 00:14:50.660 "serial_number": "SPDK00000000000001", 00:14:50.660 "model_number": "SPDK bdev Controller", 00:14:50.660 "max_namespaces": 10, 00:14:50.660 "min_cntlid": 1, 00:14:50.660 "max_cntlid": 65519, 00:14:50.660 "ana_reporting": false 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_subsystem_add_host", 00:14:50.660 "params": { 00:14:50.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.660 "host": "nqn.2016-06.io.spdk:host1", 00:14:50.660 "psk": "key0" 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_subsystem_add_ns", 00:14:50.660 "params": { 00:14:50.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.660 "namespace": { 00:14:50.660 "nsid": 1, 00:14:50.660 "bdev_name": "malloc0", 00:14:50.660 "nguid": "DAAD3012B97944238198598F13675EDF", 00:14:50.660 "uuid": 
"daad3012-b979-4423-8198-598f13675edf", 00:14:50.660 "no_auto_visible": false 00:14:50.660 } 00:14:50.660 } 00:14:50.660 }, 00:14:50.660 { 00:14:50.660 "method": "nvmf_subsystem_add_listener", 00:14:50.660 "params": { 00:14:50.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.660 "listen_address": { 00:14:50.660 "trtype": "TCP", 00:14:50.660 "adrfam": "IPv4", 00:14:50.660 "traddr": "10.0.0.3", 00:14:50.660 "trsvcid": "4420" 00:14:50.660 }, 00:14:50.660 "secure_channel": true 00:14:50.660 } 00:14:50.660 } 00:14:50.660 ] 00:14:50.660 } 00:14:50.660 ] 00:14:50.660 }' 00:14:50.660 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84899 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84899 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84899 ']' 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.919 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.919 [2024-10-29 11:03:56.226031] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:50.920 [2024-10-29 11:03:56.226164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.920 [2024-10-29 11:03:56.371980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.920 [2024-10-29 11:03:56.389954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.920 [2024-10-29 11:03:56.390023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.920 [2024-10-29 11:03:56.390048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.920 [2024-10-29 11:03:56.390055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.920 [2024-10-29 11:03:56.390062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.920 [2024-10-29 11:03:56.390402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.178 [2024-10-29 11:03:56.533500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.178 [2024-10-29 11:03:56.587684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.178 [2024-10-29 11:03:56.619650] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.178 [2024-10-29 11:03:56.619885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84931 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84931 /var/tmp/bdevperf.sock 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 84931 ']' 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:52.114 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:52.114 "subsystems": [ 00:14:52.114 { 00:14:52.114 "subsystem": "keyring", 00:14:52.114 "config": [ 00:14:52.114 { 00:14:52.114 "method": "keyring_file_add_key", 00:14:52.114 "params": { 00:14:52.114 "name": "key0", 00:14:52.114 "path": "/tmp/tmp.F3JpIVjEy0" 00:14:52.114 } 00:14:52.114 } 00:14:52.114 ] 00:14:52.114 }, 00:14:52.114 { 00:14:52.114 "subsystem": "iobuf", 00:14:52.114 "config": [ 00:14:52.114 { 00:14:52.114 "method": "iobuf_set_options", 00:14:52.114 "params": { 00:14:52.114 "small_pool_count": 8192, 00:14:52.114 "large_pool_count": 1024, 00:14:52.114 "small_bufsize": 8192, 00:14:52.114 "large_bufsize": 135168, 00:14:52.114 "enable_numa": false 00:14:52.114 } 00:14:52.114 } 00:14:52.114 ] 00:14:52.114 }, 00:14:52.114 { 00:14:52.114 "subsystem": "sock", 00:14:52.114 "config": [ 00:14:52.114 { 00:14:52.114 "method": "sock_set_default_impl", 00:14:52.114 "params": { 00:14:52.114 "impl_name": "uring" 00:14:52.114 } 00:14:52.114 }, 00:14:52.114 { 00:14:52.114 "method": "sock_impl_set_options", 00:14:52.115 "params": { 00:14:52.115 "impl_name": "ssl", 00:14:52.115 "recv_buf_size": 4096, 00:14:52.115 "send_buf_size": 4096, 00:14:52.115 "enable_recv_pipe": true, 00:14:52.115 "enable_quickack": false, 00:14:52.115 "enable_placement_id": 0, 00:14:52.115 "enable_zerocopy_send_server": true, 00:14:52.115 "enable_zerocopy_send_client": false, 00:14:52.115 "zerocopy_threshold": 0, 00:14:52.115 "tls_version": 0, 00:14:52.115 
"enable_ktls": false 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "sock_impl_set_options", 00:14:52.115 "params": { 00:14:52.115 "impl_name": "posix", 00:14:52.115 "recv_buf_size": 2097152, 00:14:52.115 "send_buf_size": 2097152, 00:14:52.115 "enable_recv_pipe": true, 00:14:52.115 "enable_quickack": false, 00:14:52.115 "enable_placement_id": 0, 00:14:52.115 "enable_zerocopy_send_server": true, 00:14:52.115 "enable_zerocopy_send_client": false, 00:14:52.115 "zerocopy_threshold": 0, 00:14:52.115 "tls_version": 0, 00:14:52.115 "enable_ktls": false 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "sock_impl_set_options", 00:14:52.115 "params": { 00:14:52.115 "impl_name": "uring", 00:14:52.115 "recv_buf_size": 2097152, 00:14:52.115 "send_buf_size": 2097152, 00:14:52.115 "enable_recv_pipe": true, 00:14:52.115 "enable_quickack": false, 00:14:52.115 "enable_placement_id": 0, 00:14:52.115 "enable_zerocopy_send_server": false, 00:14:52.115 "enable_zerocopy_send_client": false, 00:14:52.115 "zerocopy_threshold": 0, 00:14:52.115 "tls_version": 0, 00:14:52.115 "enable_ktls": false 00:14:52.115 } 00:14:52.115 } 00:14:52.115 ] 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "subsystem": "vmd", 00:14:52.115 "config": [] 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "subsystem": "accel", 00:14:52.115 "config": [ 00:14:52.115 { 00:14:52.115 "method": "accel_set_options", 00:14:52.115 "params": { 00:14:52.115 "small_cache_size": 128, 00:14:52.115 "large_cache_size": 16, 00:14:52.115 "task_count": 2048, 00:14:52.115 "sequence_count": 2048, 00:14:52.115 "buf_count": 2048 00:14:52.115 } 00:14:52.115 } 00:14:52.115 ] 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "subsystem": "bdev", 00:14:52.115 "config": [ 00:14:52.115 { 00:14:52.115 "method": "bdev_set_options", 00:14:52.115 "params": { 00:14:52.115 "bdev_io_pool_size": 65535, 00:14:52.115 "bdev_io_cache_size": 256, 00:14:52.115 "bdev_auto_examine": true, 00:14:52.115 "iobuf_small_cache_size": 128, 00:14:52.115 "iobuf_large_cache_size": 16 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "bdev_raid_set_options", 00:14:52.115 "params": { 00:14:52.115 "process_window_size_kb": 1024, 00:14:52.115 "process_max_bandwidth_mb_sec": 0 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "bdev_iscsi_set_options", 00:14:52.115 "params": { 00:14:52.115 "timeout_sec": 30 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "bdev_nvme_set_options", 00:14:52.115 "params": { 00:14:52.115 "action_on_timeout": "none", 00:14:52.115 "timeout_us": 0, 00:14:52.115 "timeout_admin_us": 0, 00:14:52.115 "keep_alive_timeout_ms": 10000, 00:14:52.115 "arbitration_burst": 0, 00:14:52.115 "low_priority_weight": 0, 00:14:52.115 "medium_priority_weight": 0, 00:14:52.115 "high_priority_weight": 0, 00:14:52.115 "nvme_adminq_poll_period_us": 10000, 00:14:52.115 "nvme_ioq_poll_period_us": 0, 00:14:52.115 "io_queue_requests": 512, 00:14:52.115 "delay_cmd_submit": true, 00:14:52.115 "transport_retry_count": 4, 00:14:52.115 "bdev_retry_count": 3, 00:14:52.115 "transport_ack_timeout": 0, 00:14:52.115 "ctrlr_loss_timeout_sec": 0, 00:14:52.115 "reconnect_delay_sec": 0, 00:14:52.115 "fast_io_fail_timeout_sec": 0, 00:14:52.115 "disable_auto_failback": false, 00:14:52.115 "generate_uuids": false, 00:14:52.115 "transport_tos": 0, 00:14:52.115 "nvme_error_stat": false, 00:14:52.115 "rdma_srq_size": 0, 00:14:52.115 "io_path_stat": false, 00:14:52.115 "allow_accel_sequence": false, 00:14:52.115 "rdma_max_cq_size": 0, 
00:14:52.115 "rdma_cm_event_timeout_ms": 0, 00:14:52.115 "dhchap_digests": [ 00:14:52.115 "sha256", 00:14:52.115 "sha384", 00:14:52.115 "sha512" 00:14:52.115 ], 00:14:52.115 "dhchap_dhgroups": [ 00:14:52.115 "null", 00:14:52.115 "ffdhe2048", 00:14:52.115 "ffdhe3072", 00:14:52.115 "ffdhe4096", 00:14:52.115 "ffdhe6144", 00:14:52.115 "ffdhe8192" 00:14:52.115 ] 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "bdev_nvme_attach_controller", 00:14:52.115 "params": { 00:14:52.115 "name": "TLSTEST", 00:14:52.115 "trtype": "TCP", 00:14:52.115 "adrfam": "IPv4", 00:14:52.115 "traddr": "10.0.0.3", 00:14:52.115 "trsvcid": "4420", 00:14:52.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.115 "prchk_reftag": false, 00:14:52.115 "prchk_guard": false, 00:14:52.115 "ctrlr_loss_timeout_sec": 0, 00:14:52.115 "reconnect_delay_sec": 0, 00:14:52.115 "fast_io_fail_timeout_sec": 0, 00:14:52.115 "psk": "key0", 00:14:52.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.115 "hdgst": false, 00:14:52.115 "ddgst": false, 00:14:52.115 "multipath": "multipath" 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "bdev_nvme_set_hotplug", 00:14:52.115 "params": { 00:14:52.115 "period_us": 100000, 00:14:52.115 "enable": false 00:14:52.115 } 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "method": "bdev_wait_for_examine" 00:14:52.115 } 00:14:52.115 ] 00:14:52.115 }, 00:14:52.115 { 00:14:52.115 "subsystem": "nbd", 00:14:52.115 "config": [] 00:14:52.115 } 00:14:52.115 ] 00:14:52.115 }' 00:14:52.115 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.115 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.115 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.115 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.115 [2024-10-29 11:03:57.403904] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:14:52.115 [2024-10-29 11:03:57.404038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84931 ] 00:14:52.115 [2024-10-29 11:03:57.559802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.115 [2024-10-29 11:03:57.585018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.374 [2024-10-29 11:03:57.700475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.374 [2024-10-29 11:03:57.732066] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:53.308 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.308 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:53.308 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:53.308 Running I/O for 10 seconds... 
00:14:55.633 3839.00 IOPS, 15.00 MiB/s [2024-10-29T11:04:02.066Z] 4098.50 IOPS, 16.01 MiB/s [2024-10-29T11:04:03.003Z] 4141.33 IOPS, 16.18 MiB/s [2024-10-29T11:04:03.942Z] 4135.25 IOPS, 16.15 MiB/s [2024-10-29T11:04:04.880Z] 4175.40 IOPS, 16.31 MiB/s [2024-10-29T11:04:05.817Z] 4199.67 IOPS, 16.40 MiB/s [2024-10-29T11:04:06.754Z] 4191.43 IOPS, 16.37 MiB/s [2024-10-29T11:04:07.716Z] 4150.00 IOPS, 16.21 MiB/s [2024-10-29T11:04:09.095Z] 4109.44 IOPS, 16.05 MiB/s [2024-10-29T11:04:09.095Z] 4084.30 IOPS, 15.95 MiB/s 00:15:03.598 Latency(us) 00:15:03.598 [2024-10-29T11:04:09.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.598 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:03.598 Verification LBA range: start 0x0 length 0x2000 00:15:03.598 TLSTESTn1 : 10.01 4090.96 15.98 0.00 0.00 31235.77 4140.68 28001.75 00:15:03.598 [2024-10-29T11:04:09.095Z] =================================================================================================================== 00:15:03.598 [2024-10-29T11:04:09.095Z] Total : 4090.96 15.98 0.00 0.00 31235.77 4140.68 28001.75 00:15:03.598 { 00:15:03.598 "results": [ 00:15:03.598 { 00:15:03.598 "job": "TLSTESTn1", 00:15:03.598 "core_mask": "0x4", 00:15:03.598 "workload": "verify", 00:15:03.598 "status": "finished", 00:15:03.598 "verify_range": { 00:15:03.598 "start": 0, 00:15:03.598 "length": 8192 00:15:03.598 }, 00:15:03.598 "queue_depth": 128, 00:15:03.598 "io_size": 4096, 00:15:03.598 "runtime": 10.014509, 00:15:03.598 "iops": 4090.9644197234234, 00:15:03.598 "mibps": 15.980329764544623, 00:15:03.598 "io_failed": 0, 00:15:03.598 "io_timeout": 0, 00:15:03.598 "avg_latency_us": 31235.774125269883, 00:15:03.598 "min_latency_us": 4140.683636363637, 00:15:03.598 "max_latency_us": 28001.745454545453 00:15:03.598 } 00:15:03.598 ], 00:15:03.598 "core_count": 1 00:15:03.598 } 00:15:03.598 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:03.598 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84931 00:15:03.598 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84931 ']' 00:15:03.598 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84931 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84931 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:03.599 killing process with pid 84931 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84931' 00:15:03.599 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.599 00:15:03.599 Latency(us) 00:15:03.599 [2024-10-29T11:04:09.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.599 [2024-10-29T11:04:09.096Z] =================================================================================================================== 00:15:03.599 [2024-10-29T11:04:09.096Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84931 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84931 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84899 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 84899 ']' 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 84899 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84899 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:03.599 killing process with pid 84899 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84899' 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 84899 00:15:03.599 11:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 84899 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85064 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85064 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 85064 ']' 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:03.599 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.858 [2024-10-29 11:04:09.146700] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
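The summary above is internally consistent: with 4096-byte I/Os, 4090.96 IOPS works out to 4090.96 x 4096 / 1048576 ≈ 15.98 MiB/s over the 10.01 s runtime, which matches the MiB/s column. With the bdevperf instance (pid 84931) and the saved-config target (pid 84899) both shut down, the next case brings up a clean target (pid 85064) and configures it with individual RPC calls instead of a restored JSON config.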
00:15:03.858 [2024-10-29 11:04:09.146794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.858 [2024-10-29 11:04:09.299338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.858 [2024-10-29 11:04:09.325026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.858 [2024-10-29 11:04:09.325093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.858 [2024-10-29 11:04:09.325106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.858 [2024-10-29 11:04:09.325116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.858 [2024-10-29 11:04:09.325136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.858 [2024-10-29 11:04:09.325504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.117 [2024-10-29 11:04:09.366362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.F3JpIVjEy0 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.F3JpIVjEy0 00:15:04.117 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:04.375 [2024-10-29 11:04:09.733746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.375 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:04.633 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:04.891 [2024-10-29 11:04:10.221854] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:04.891 [2024-10-29 11:04:10.222075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:04.891 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:05.150 malloc0 00:15:05.150 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:15:05.409 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:15:05.669 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:05.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=85118 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 85118 /var/tmp/bdevperf.sock 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 85118 ']' 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:05.928 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.186 [2024-10-29 11:04:11.450524] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
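Collected from the trace lines above (target/tls.sh@52-59), the target-side sequence for a TLS-capable subsystem is compact: -k on the listener enables TLS for that address, and --psk key0 ties the allowed host to the keyring entry. Restated as plain commands, with rpc.py abbreviating the full scripts/rpc.py path; all NQNs, addresses and key names are taken from the log:

  rpc.py nvmf_create_transport -t tcp -o                      # TCP transport (flags as in the log)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -s SPDK00000000000001 -m 10                             # subsystem, up to 10 namespaces
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -k                           # -k: TLS listener on port 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0                # 32 MiB / 4 KiB-block backing bdev
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0        # PSK interchange file
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0                    # host1 must present this PSK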
00:15:06.186 [2024-10-29 11:04:11.450631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85118 ] 00:15:06.186 [2024-10-29 11:04:11.596496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.186 [2024-10-29 11:04:11.623904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.186 [2024-10-29 11:04:11.662400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.443 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:06.443 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:06.443 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:15:06.700 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:06.957 [2024-10-29 11:04:12.394513] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.214 nvme0n1 00:15:07.214 11:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:07.214 Running I/O for 1 seconds... 00:15:08.153 3164.00 IOPS, 12.36 MiB/s 00:15:08.153 Latency(us) 00:15:08.153 [2024-10-29T11:04:13.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.153 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:08.153 Verification LBA range: start 0x0 length 0x2000 00:15:08.153 nvme0n1 : 1.02 3237.01 12.64 0.00 0.00 39234.33 4587.52 33363.78 00:15:08.153 [2024-10-29T11:04:13.650Z] =================================================================================================================== 00:15:08.153 [2024-10-29T11:04:13.650Z] Total : 3237.01 12.64 0.00 0.00 39234.33 4587.52 33363.78 00:15:08.153 { 00:15:08.153 "results": [ 00:15:08.153 { 00:15:08.153 "job": "nvme0n1", 00:15:08.153 "core_mask": "0x2", 00:15:08.153 "workload": "verify", 00:15:08.153 "status": "finished", 00:15:08.153 "verify_range": { 00:15:08.153 "start": 0, 00:15:08.153 "length": 8192 00:15:08.153 }, 00:15:08.153 "queue_depth": 128, 00:15:08.153 "io_size": 4096, 00:15:08.153 "runtime": 1.016987, 00:15:08.153 "iops": 3237.0128625046336, 00:15:08.153 "mibps": 12.644581494158725, 00:15:08.153 "io_failed": 0, 00:15:08.153 "io_timeout": 0, 00:15:08.153 "avg_latency_us": 39234.332079973494, 00:15:08.153 "min_latency_us": 4587.52, 00:15:08.153 "max_latency_us": 33363.781818181815 00:15:08.153 } 00:15:08.153 ], 00:15:08.153 "core_count": 1 00:15:08.153 } 00:15:08.153 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 85118 00:15:08.153 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 85118 ']' 00:15:08.153 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 85118 00:15:08.153 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85118 00:15:08.411 killing process with pid 85118 00:15:08.411 Received shutdown signal, test time was about 1.000000 seconds 00:15:08.411 00:15:08.411 Latency(us) 00:15:08.411 [2024-10-29T11:04:13.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.411 [2024-10-29T11:04:13.908Z] =================================================================================================================== 00:15:08.411 [2024-10-29T11:04:13.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85118' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 85118 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 85118 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85064 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 85064 ']' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 85064 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85064 00:15:08.411 killing process with pid 85064 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85064' 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 85064 00:15:08.411 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 85064 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85160 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85160 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 85160 ']' 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:08.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:08.669 11:04:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.669 [2024-10-29 11:04:14.063207] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:15:08.669 [2024-10-29 11:04:14.063319] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.927 [2024-10-29 11:04:14.218926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.927 [2024-10-29 11:04:14.242593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.927 [2024-10-29 11:04:14.242670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.927 [2024-10-29 11:04:14.242685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.927 [2024-10-29 11:04:14.242695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.927 [2024-10-29 11:04:14.242703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
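For orientation, the TLS verification that pid 85118 completed above reduces to three RPC-driven steps against the bdevperf application. The sketch below is a minimal reconstruction using the socket path, key file and NQNs taken from this run; it is illustrative only, not a replacement for the tls.sh helpers.

# Minimal sketch of the bdevperf-side TLS attach exercised above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock
# 1. Register the PSK file under the name "key0" in the bdevperf keyring.
$RPC -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0
# 2. Attach an NVMe-oF/TCP controller to the target listener, using the PSK for TLS.
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# 3. Run the verify workload configured on the bdevperf command line (-q 128 -o 4k -w verify -t 1).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

The JSON block printed after the run is the machine-readable form of the same result table (IOPS, MiB/s and average/min/max latency for nvme0n1).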
00:15:08.927 [2024-10-29 11:04:14.243071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.927 [2024-10-29 11:04:14.277899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.927 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.927 [2024-10-29 11:04:14.392242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.927 malloc0 00:15:08.927 [2024-10-29 11:04:14.419654] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.927 [2024-10-29 11:04:14.420105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=85186 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 85186 /var/tmp/bdevperf.sock 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 85186 ']' 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:09.186 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.186 [2024-10-29 11:04:14.511252] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
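The rpc_cmd block above (issued against the new target, pid 85160, before bdevperf was launched) builds the target side of the test: a TCP transport, a 32 MiB malloc namespace, and a subsystem whose listener and host entry are TLS-enabled through key0. A rough rpc.py equivalent is sketched below; the flag spellings are reconstructed from memory and may differ between SPDK releases, so treat the saved configuration dumped further down (tgtcfg) as the authoritative record of the exact methods and parameters.

# Illustrative target-side setup (flags approximate; see the saved config below for exact params).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t TCP                                   # "*** TCP Transport Init ***"
$RPC keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0                  # PSK referenced by the host entry
$RPC bdev_malloc_create -b malloc0 32 4096                          # 8192 blocks x 4096 B namespace bdev
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s "00000000000000000000" -m 32
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# The run additionally records "sock_impl": "ssl" and "secure_channel": false on the listener
# (see the nvmf_subsystem_add_listener entry in the saved config); the rpc.py option for that
# is version-dependent and therefore omitted here.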
00:15:09.186 [2024-10-29 11:04:14.511405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85186 ] 00:15:09.186 [2024-10-29 11:04:14.658097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.186 [2024-10-29 11:04:14.678989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.444 [2024-10-29 11:04:14.710035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.444 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:09.444 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:09.444 11:04:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F3JpIVjEy0 00:15:09.703 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:09.970 [2024-10-29 11:04:15.382747] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.970 nvme0n1 00:15:10.246 11:04:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:10.246 Running I/O for 1 seconds... 00:15:11.208 4352.00 IOPS, 17.00 MiB/s 00:15:11.208 Latency(us) 00:15:11.208 [2024-10-29T11:04:16.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.208 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:11.208 Verification LBA range: start 0x0 length 0x2000 00:15:11.208 nvme0n1 : 1.02 4390.22 17.15 0.00 0.00 28868.06 10247.45 21328.99 00:15:11.208 [2024-10-29T11:04:16.705Z] =================================================================================================================== 00:15:11.208 [2024-10-29T11:04:16.705Z] Total : 4390.22 17.15 0.00 0.00 28868.06 10247.45 21328.99 00:15:11.208 { 00:15:11.208 "results": [ 00:15:11.208 { 00:15:11.208 "job": "nvme0n1", 00:15:11.208 "core_mask": "0x2", 00:15:11.208 "workload": "verify", 00:15:11.208 "status": "finished", 00:15:11.208 "verify_range": { 00:15:11.208 "start": 0, 00:15:11.208 "length": 8192 00:15:11.208 }, 00:15:11.208 "queue_depth": 128, 00:15:11.208 "io_size": 4096, 00:15:11.208 "runtime": 1.020449, 00:15:11.208 "iops": 4390.224303223385, 00:15:11.208 "mibps": 17.149313684466346, 00:15:11.208 "io_failed": 0, 00:15:11.208 "io_timeout": 0, 00:15:11.208 "avg_latency_us": 28868.06275324675, 00:15:11.208 "min_latency_us": 10247.447272727273, 00:15:11.208 "max_latency_us": 21328.98909090909 00:15:11.208 } 00:15:11.208 ], 00:15:11.208 "core_count": 1 00:15:11.208 } 00:15:11.208 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:11.208 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.208 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.466 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.466 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:11.466 "subsystems": [ 00:15:11.466 { 00:15:11.466 "subsystem": "keyring", 00:15:11.466 "config": [ 00:15:11.466 { 00:15:11.466 "method": "keyring_file_add_key", 00:15:11.466 "params": { 00:15:11.466 "name": "key0", 00:15:11.466 "path": "/tmp/tmp.F3JpIVjEy0" 00:15:11.466 } 00:15:11.466 } 00:15:11.466 ] 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "subsystem": "iobuf", 00:15:11.466 "config": [ 00:15:11.466 { 00:15:11.466 "method": "iobuf_set_options", 00:15:11.466 "params": { 00:15:11.466 "small_pool_count": 8192, 00:15:11.466 "large_pool_count": 1024, 00:15:11.466 "small_bufsize": 8192, 00:15:11.466 "large_bufsize": 135168, 00:15:11.466 "enable_numa": false 00:15:11.466 } 00:15:11.466 } 00:15:11.466 ] 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "subsystem": "sock", 00:15:11.466 "config": [ 00:15:11.466 { 00:15:11.466 "method": "sock_set_default_impl", 00:15:11.466 "params": { 00:15:11.466 "impl_name": "uring" 00:15:11.466 } 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "method": "sock_impl_set_options", 00:15:11.466 "params": { 00:15:11.466 "impl_name": "ssl", 00:15:11.466 "recv_buf_size": 4096, 00:15:11.466 "send_buf_size": 4096, 00:15:11.466 "enable_recv_pipe": true, 00:15:11.466 "enable_quickack": false, 00:15:11.466 "enable_placement_id": 0, 00:15:11.466 "enable_zerocopy_send_server": true, 00:15:11.466 "enable_zerocopy_send_client": false, 00:15:11.466 "zerocopy_threshold": 0, 00:15:11.466 "tls_version": 0, 00:15:11.466 "enable_ktls": false 00:15:11.466 } 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "method": "sock_impl_set_options", 00:15:11.466 "params": { 00:15:11.466 "impl_name": "posix", 00:15:11.466 "recv_buf_size": 2097152, 00:15:11.466 "send_buf_size": 2097152, 00:15:11.466 "enable_recv_pipe": true, 00:15:11.466 "enable_quickack": false, 00:15:11.466 "enable_placement_id": 0, 00:15:11.466 "enable_zerocopy_send_server": true, 00:15:11.466 "enable_zerocopy_send_client": false, 00:15:11.466 "zerocopy_threshold": 0, 00:15:11.466 "tls_version": 0, 00:15:11.466 "enable_ktls": false 00:15:11.466 } 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "method": "sock_impl_set_options", 00:15:11.466 "params": { 00:15:11.466 "impl_name": "uring", 00:15:11.466 "recv_buf_size": 2097152, 00:15:11.466 "send_buf_size": 2097152, 00:15:11.466 "enable_recv_pipe": true, 00:15:11.466 "enable_quickack": false, 00:15:11.466 "enable_placement_id": 0, 00:15:11.466 "enable_zerocopy_send_server": false, 00:15:11.466 "enable_zerocopy_send_client": false, 00:15:11.466 "zerocopy_threshold": 0, 00:15:11.466 "tls_version": 0, 00:15:11.466 "enable_ktls": false 00:15:11.466 } 00:15:11.466 } 00:15:11.466 ] 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "subsystem": "vmd", 00:15:11.466 "config": [] 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "subsystem": "accel", 00:15:11.466 "config": [ 00:15:11.466 { 00:15:11.466 "method": "accel_set_options", 00:15:11.466 "params": { 00:15:11.466 "small_cache_size": 128, 00:15:11.466 "large_cache_size": 16, 00:15:11.466 "task_count": 2048, 00:15:11.466 "sequence_count": 2048, 00:15:11.466 "buf_count": 2048 00:15:11.466 } 00:15:11.466 } 00:15:11.466 ] 00:15:11.466 }, 00:15:11.466 { 00:15:11.466 "subsystem": "bdev", 00:15:11.466 "config": [ 00:15:11.466 { 00:15:11.466 "method": "bdev_set_options", 00:15:11.466 "params": { 00:15:11.466 "bdev_io_pool_size": 65535, 00:15:11.466 "bdev_io_cache_size": 256, 00:15:11.466 "bdev_auto_examine": true, 
00:15:11.466 "iobuf_small_cache_size": 128, 00:15:11.466 "iobuf_large_cache_size": 16 00:15:11.466 } 00:15:11.466 }, 00:15:11.466 { 00:15:11.467 "method": "bdev_raid_set_options", 00:15:11.467 "params": { 00:15:11.467 "process_window_size_kb": 1024, 00:15:11.467 "process_max_bandwidth_mb_sec": 0 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "bdev_iscsi_set_options", 00:15:11.467 "params": { 00:15:11.467 "timeout_sec": 30 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "bdev_nvme_set_options", 00:15:11.467 "params": { 00:15:11.467 "action_on_timeout": "none", 00:15:11.467 "timeout_us": 0, 00:15:11.467 "timeout_admin_us": 0, 00:15:11.467 "keep_alive_timeout_ms": 10000, 00:15:11.467 "arbitration_burst": 0, 00:15:11.467 "low_priority_weight": 0, 00:15:11.467 "medium_priority_weight": 0, 00:15:11.467 "high_priority_weight": 0, 00:15:11.467 "nvme_adminq_poll_period_us": 10000, 00:15:11.467 "nvme_ioq_poll_period_us": 0, 00:15:11.467 "io_queue_requests": 0, 00:15:11.467 "delay_cmd_submit": true, 00:15:11.467 "transport_retry_count": 4, 00:15:11.467 "bdev_retry_count": 3, 00:15:11.467 "transport_ack_timeout": 0, 00:15:11.467 "ctrlr_loss_timeout_sec": 0, 00:15:11.467 "reconnect_delay_sec": 0, 00:15:11.467 "fast_io_fail_timeout_sec": 0, 00:15:11.467 "disable_auto_failback": false, 00:15:11.467 "generate_uuids": false, 00:15:11.467 "transport_tos": 0, 00:15:11.467 "nvme_error_stat": false, 00:15:11.467 "rdma_srq_size": 0, 00:15:11.467 "io_path_stat": false, 00:15:11.467 "allow_accel_sequence": false, 00:15:11.467 "rdma_max_cq_size": 0, 00:15:11.467 "rdma_cm_event_timeout_ms": 0, 00:15:11.467 "dhchap_digests": [ 00:15:11.467 "sha256", 00:15:11.467 "sha384", 00:15:11.467 "sha512" 00:15:11.467 ], 00:15:11.467 "dhchap_dhgroups": [ 00:15:11.467 "null", 00:15:11.467 "ffdhe2048", 00:15:11.467 "ffdhe3072", 00:15:11.467 "ffdhe4096", 00:15:11.467 "ffdhe6144", 00:15:11.467 "ffdhe8192" 00:15:11.467 ] 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "bdev_nvme_set_hotplug", 00:15:11.467 "params": { 00:15:11.467 "period_us": 100000, 00:15:11.467 "enable": false 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "bdev_malloc_create", 00:15:11.467 "params": { 00:15:11.467 "name": "malloc0", 00:15:11.467 "num_blocks": 8192, 00:15:11.467 "block_size": 4096, 00:15:11.467 "physical_block_size": 4096, 00:15:11.467 "uuid": "3851c035-bbab-4e2f-9465-a17885790e92", 00:15:11.467 "optimal_io_boundary": 0, 00:15:11.467 "md_size": 0, 00:15:11.467 "dif_type": 0, 00:15:11.467 "dif_is_head_of_md": false, 00:15:11.467 "dif_pi_format": 0 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "bdev_wait_for_examine" 00:15:11.467 } 00:15:11.467 ] 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "subsystem": "nbd", 00:15:11.467 "config": [] 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "subsystem": "scheduler", 00:15:11.467 "config": [ 00:15:11.467 { 00:15:11.467 "method": "framework_set_scheduler", 00:15:11.467 "params": { 00:15:11.467 "name": "static" 00:15:11.467 } 00:15:11.467 } 00:15:11.467 ] 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "subsystem": "nvmf", 00:15:11.467 "config": [ 00:15:11.467 { 00:15:11.467 "method": "nvmf_set_config", 00:15:11.467 "params": { 00:15:11.467 "discovery_filter": "match_any", 00:15:11.467 "admin_cmd_passthru": { 00:15:11.467 "identify_ctrlr": false 00:15:11.467 }, 00:15:11.467 "dhchap_digests": [ 00:15:11.467 "sha256", 00:15:11.467 "sha384", 00:15:11.467 "sha512" 00:15:11.467 ], 00:15:11.467 "dhchap_dhgroups": [ 
00:15:11.467 "null", 00:15:11.467 "ffdhe2048", 00:15:11.467 "ffdhe3072", 00:15:11.467 "ffdhe4096", 00:15:11.467 "ffdhe6144", 00:15:11.467 "ffdhe8192" 00:15:11.467 ] 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_set_max_subsystems", 00:15:11.467 "params": { 00:15:11.467 "max_subsystems": 1024 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_set_crdt", 00:15:11.467 "params": { 00:15:11.467 "crdt1": 0, 00:15:11.467 "crdt2": 0, 00:15:11.467 "crdt3": 0 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_create_transport", 00:15:11.467 "params": { 00:15:11.467 "trtype": "TCP", 00:15:11.467 "max_queue_depth": 128, 00:15:11.467 "max_io_qpairs_per_ctrlr": 127, 00:15:11.467 "in_capsule_data_size": 4096, 00:15:11.467 "max_io_size": 131072, 00:15:11.467 "io_unit_size": 131072, 00:15:11.467 "max_aq_depth": 128, 00:15:11.467 "num_shared_buffers": 511, 00:15:11.467 "buf_cache_size": 4294967295, 00:15:11.467 "dif_insert_or_strip": false, 00:15:11.467 "zcopy": false, 00:15:11.467 "c2h_success": false, 00:15:11.467 "sock_priority": 0, 00:15:11.467 "abort_timeout_sec": 1, 00:15:11.467 "ack_timeout": 0, 00:15:11.467 "data_wr_pool_size": 0 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_create_subsystem", 00:15:11.467 "params": { 00:15:11.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.467 "allow_any_host": false, 00:15:11.467 "serial_number": "00000000000000000000", 00:15:11.467 "model_number": "SPDK bdev Controller", 00:15:11.467 "max_namespaces": 32, 00:15:11.467 "min_cntlid": 1, 00:15:11.467 "max_cntlid": 65519, 00:15:11.467 "ana_reporting": false 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_subsystem_add_host", 00:15:11.467 "params": { 00:15:11.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.467 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.467 "psk": "key0" 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_subsystem_add_ns", 00:15:11.467 "params": { 00:15:11.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.467 "namespace": { 00:15:11.467 "nsid": 1, 00:15:11.467 "bdev_name": "malloc0", 00:15:11.467 "nguid": "3851C035BBAB4E2F9465A17885790E92", 00:15:11.467 "uuid": "3851c035-bbab-4e2f-9465-a17885790e92", 00:15:11.467 "no_auto_visible": false 00:15:11.467 } 00:15:11.467 } 00:15:11.467 }, 00:15:11.467 { 00:15:11.467 "method": "nvmf_subsystem_add_listener", 00:15:11.467 "params": { 00:15:11.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.467 "listen_address": { 00:15:11.467 "trtype": "TCP", 00:15:11.467 "adrfam": "IPv4", 00:15:11.467 "traddr": "10.0.0.3", 00:15:11.467 "trsvcid": "4420" 00:15:11.467 }, 00:15:11.467 "secure_channel": false, 00:15:11.467 "sock_impl": "ssl" 00:15:11.467 } 00:15:11.467 } 00:15:11.467 ] 00:15:11.467 } 00:15:11.467 ] 00:15:11.467 }' 00:15:11.467 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:11.725 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:11.725 "subsystems": [ 00:15:11.725 { 00:15:11.725 "subsystem": "keyring", 00:15:11.725 "config": [ 00:15:11.725 { 00:15:11.725 "method": "keyring_file_add_key", 00:15:11.725 "params": { 00:15:11.725 "name": "key0", 00:15:11.725 "path": "/tmp/tmp.F3JpIVjEy0" 00:15:11.725 } 00:15:11.725 } 00:15:11.725 ] 00:15:11.725 }, 00:15:11.725 { 00:15:11.725 "subsystem": "iobuf", 00:15:11.725 "config": [ 00:15:11.725 { 00:15:11.725 "method": 
"iobuf_set_options", 00:15:11.725 "params": { 00:15:11.725 "small_pool_count": 8192, 00:15:11.725 "large_pool_count": 1024, 00:15:11.725 "small_bufsize": 8192, 00:15:11.725 "large_bufsize": 135168, 00:15:11.725 "enable_numa": false 00:15:11.725 } 00:15:11.725 } 00:15:11.725 ] 00:15:11.725 }, 00:15:11.725 { 00:15:11.725 "subsystem": "sock", 00:15:11.725 "config": [ 00:15:11.725 { 00:15:11.725 "method": "sock_set_default_impl", 00:15:11.725 "params": { 00:15:11.725 "impl_name": "uring" 00:15:11.725 } 00:15:11.725 }, 00:15:11.725 { 00:15:11.725 "method": "sock_impl_set_options", 00:15:11.725 "params": { 00:15:11.725 "impl_name": "ssl", 00:15:11.725 "recv_buf_size": 4096, 00:15:11.725 "send_buf_size": 4096, 00:15:11.725 "enable_recv_pipe": true, 00:15:11.725 "enable_quickack": false, 00:15:11.725 "enable_placement_id": 0, 00:15:11.725 "enable_zerocopy_send_server": true, 00:15:11.725 "enable_zerocopy_send_client": false, 00:15:11.725 "zerocopy_threshold": 0, 00:15:11.725 "tls_version": 0, 00:15:11.725 "enable_ktls": false 00:15:11.725 } 00:15:11.725 }, 00:15:11.725 { 00:15:11.725 "method": "sock_impl_set_options", 00:15:11.725 "params": { 00:15:11.725 "impl_name": "posix", 00:15:11.725 "recv_buf_size": 2097152, 00:15:11.725 "send_buf_size": 2097152, 00:15:11.725 "enable_recv_pipe": true, 00:15:11.725 "enable_quickack": false, 00:15:11.725 "enable_placement_id": 0, 00:15:11.725 "enable_zerocopy_send_server": true, 00:15:11.725 "enable_zerocopy_send_client": false, 00:15:11.725 "zerocopy_threshold": 0, 00:15:11.725 "tls_version": 0, 00:15:11.726 "enable_ktls": false 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "sock_impl_set_options", 00:15:11.726 "params": { 00:15:11.726 "impl_name": "uring", 00:15:11.726 "recv_buf_size": 2097152, 00:15:11.726 "send_buf_size": 2097152, 00:15:11.726 "enable_recv_pipe": true, 00:15:11.726 "enable_quickack": false, 00:15:11.726 "enable_placement_id": 0, 00:15:11.726 "enable_zerocopy_send_server": false, 00:15:11.726 "enable_zerocopy_send_client": false, 00:15:11.726 "zerocopy_threshold": 0, 00:15:11.726 "tls_version": 0, 00:15:11.726 "enable_ktls": false 00:15:11.726 } 00:15:11.726 } 00:15:11.726 ] 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "subsystem": "vmd", 00:15:11.726 "config": [] 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "subsystem": "accel", 00:15:11.726 "config": [ 00:15:11.726 { 00:15:11.726 "method": "accel_set_options", 00:15:11.726 "params": { 00:15:11.726 "small_cache_size": 128, 00:15:11.726 "large_cache_size": 16, 00:15:11.726 "task_count": 2048, 00:15:11.726 "sequence_count": 2048, 00:15:11.726 "buf_count": 2048 00:15:11.726 } 00:15:11.726 } 00:15:11.726 ] 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "subsystem": "bdev", 00:15:11.726 "config": [ 00:15:11.726 { 00:15:11.726 "method": "bdev_set_options", 00:15:11.726 "params": { 00:15:11.726 "bdev_io_pool_size": 65535, 00:15:11.726 "bdev_io_cache_size": 256, 00:15:11.726 "bdev_auto_examine": true, 00:15:11.726 "iobuf_small_cache_size": 128, 00:15:11.726 "iobuf_large_cache_size": 16 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_raid_set_options", 00:15:11.726 "params": { 00:15:11.726 "process_window_size_kb": 1024, 00:15:11.726 "process_max_bandwidth_mb_sec": 0 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_iscsi_set_options", 00:15:11.726 "params": { 00:15:11.726 "timeout_sec": 30 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_nvme_set_options", 00:15:11.726 "params": { 00:15:11.726 
"action_on_timeout": "none", 00:15:11.726 "timeout_us": 0, 00:15:11.726 "timeout_admin_us": 0, 00:15:11.726 "keep_alive_timeout_ms": 10000, 00:15:11.726 "arbitration_burst": 0, 00:15:11.726 "low_priority_weight": 0, 00:15:11.726 "medium_priority_weight": 0, 00:15:11.726 "high_priority_weight": 0, 00:15:11.726 "nvme_adminq_poll_period_us": 10000, 00:15:11.726 "nvme_ioq_poll_period_us": 0, 00:15:11.726 "io_queue_requests": 512, 00:15:11.726 "delay_cmd_submit": true, 00:15:11.726 "transport_retry_count": 4, 00:15:11.726 "bdev_retry_count": 3, 00:15:11.726 "transport_ack_timeout": 0, 00:15:11.726 "ctrlr_loss_timeout_sec": 0, 00:15:11.726 "reconnect_delay_sec": 0, 00:15:11.726 "fast_io_fail_timeout_sec": 0, 00:15:11.726 "disable_auto_failback": false, 00:15:11.726 "generate_uuids": false, 00:15:11.726 "transport_tos": 0, 00:15:11.726 "nvme_error_stat": false, 00:15:11.726 "rdma_srq_size": 0, 00:15:11.726 "io_path_stat": false, 00:15:11.726 "allow_accel_sequence": false, 00:15:11.726 "rdma_max_cq_size": 0, 00:15:11.726 "rdma_cm_event_timeout_ms": 0, 00:15:11.726 "dhchap_digests": [ 00:15:11.726 "sha256", 00:15:11.726 "sha384", 00:15:11.726 "sha512" 00:15:11.726 ], 00:15:11.726 "dhchap_dhgroups": [ 00:15:11.726 "null", 00:15:11.726 "ffdhe2048", 00:15:11.726 "ffdhe3072", 00:15:11.726 "ffdhe4096", 00:15:11.726 "ffdhe6144", 00:15:11.726 "ffdhe8192" 00:15:11.726 ] 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_nvme_attach_controller", 00:15:11.726 "params": { 00:15:11.726 "name": "nvme0", 00:15:11.726 "trtype": "TCP", 00:15:11.726 "adrfam": "IPv4", 00:15:11.726 "traddr": "10.0.0.3", 00:15:11.726 "trsvcid": "4420", 00:15:11.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.726 "prchk_reftag": false, 00:15:11.726 "prchk_guard": false, 00:15:11.726 "ctrlr_loss_timeout_sec": 0, 00:15:11.726 "reconnect_delay_sec": 0, 00:15:11.726 "fast_io_fail_timeout_sec": 0, 00:15:11.726 "psk": "key0", 00:15:11.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.726 "hdgst": false, 00:15:11.726 "ddgst": false, 00:15:11.726 "multipath": "multipath" 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_nvme_set_hotplug", 00:15:11.726 "params": { 00:15:11.726 "period_us": 100000, 00:15:11.726 "enable": false 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_enable_histogram", 00:15:11.726 "params": { 00:15:11.726 "name": "nvme0n1", 00:15:11.726 "enable": true 00:15:11.726 } 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "method": "bdev_wait_for_examine" 00:15:11.726 } 00:15:11.726 ] 00:15:11.726 }, 00:15:11.726 { 00:15:11.726 "subsystem": "nbd", 00:15:11.726 "config": [] 00:15:11.726 } 00:15:11.726 ] 00:15:11.726 }' 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 85186 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 85186 ']' 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 85186 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85186 00:15:11.726 killing process with pid 85186 00:15:11.726 Received shutdown signal, test time was about 1.000000 seconds 00:15:11.726 00:15:11.726 Latency(us) 00:15:11.726 
[2024-10-29T11:04:17.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.726 [2024-10-29T11:04:17.223Z] =================================================================================================================== 00:15:11.726 [2024-10-29T11:04:17.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85186' 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 85186 00:15:11.726 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 85186 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 85160 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 85160 ']' 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 85160 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85160 00:15:11.985 killing process with pid 85160 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85160' 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 85160 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 85160 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:11.985 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:11.985 "subsystems": [ 00:15:11.985 { 00:15:11.985 "subsystem": "keyring", 00:15:11.985 "config": [ 00:15:11.985 { 00:15:11.985 "method": "keyring_file_add_key", 00:15:11.985 "params": { 00:15:11.985 "name": "key0", 00:15:11.985 "path": "/tmp/tmp.F3JpIVjEy0" 00:15:11.985 } 00:15:11.985 } 00:15:11.985 ] 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "subsystem": "iobuf", 00:15:11.985 "config": [ 00:15:11.985 { 00:15:11.985 "method": "iobuf_set_options", 00:15:11.985 "params": { 00:15:11.985 "small_pool_count": 8192, 00:15:11.985 "large_pool_count": 1024, 00:15:11.985 "small_bufsize": 8192, 00:15:11.985 "large_bufsize": 135168, 00:15:11.985 "enable_numa": false 00:15:11.985 } 00:15:11.985 } 00:15:11.985 ] 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "subsystem": "sock", 00:15:11.985 "config": [ 00:15:11.985 { 00:15:11.985 "method": "sock_set_default_impl", 00:15:11.985 "params": { 00:15:11.985 "impl_name": "uring" 00:15:11.985 } 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "method": "sock_impl_set_options", 00:15:11.985 "params": { 00:15:11.985 "impl_name": "ssl", 00:15:11.985 "recv_buf_size": 4096, 
00:15:11.985 "send_buf_size": 4096, 00:15:11.985 "enable_recv_pipe": true, 00:15:11.985 "enable_quickack": false, 00:15:11.985 "enable_placement_id": 0, 00:15:11.985 "enable_zerocopy_send_server": true, 00:15:11.985 "enable_zerocopy_send_client": false, 00:15:11.985 "zerocopy_threshold": 0, 00:15:11.985 "tls_version": 0, 00:15:11.985 "enable_ktls": false 00:15:11.985 } 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "method": "sock_impl_set_options", 00:15:11.985 "params": { 00:15:11.985 "impl_name": "posix", 00:15:11.985 "recv_buf_size": 2097152, 00:15:11.985 "send_buf_size": 2097152, 00:15:11.985 "enable_recv_pipe": true, 00:15:11.985 "enable_quickack": false, 00:15:11.985 "enable_placement_id": 0, 00:15:11.985 "enable_zerocopy_send_server": true, 00:15:11.985 "enable_zerocopy_send_client": false, 00:15:11.985 "zerocopy_threshold": 0, 00:15:11.985 "tls_version": 0, 00:15:11.985 "enable_ktls": false 00:15:11.985 } 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "method": "sock_impl_set_options", 00:15:11.985 "params": { 00:15:11.985 "impl_name": "uring", 00:15:11.985 "recv_buf_size": 2097152, 00:15:11.985 "send_buf_size": 2097152, 00:15:11.985 "enable_recv_pipe": true, 00:15:11.985 "enable_quickack": false, 00:15:11.985 "enable_placement_id": 0, 00:15:11.985 "enable_zerocopy_send_server": false, 00:15:11.985 "enable_zerocopy_send_client": false, 00:15:11.985 "zerocopy_threshold": 0, 00:15:11.985 "tls_version": 0, 00:15:11.985 "enable_ktls": false 00:15:11.985 } 00:15:11.985 } 00:15:11.985 ] 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "subsystem": "vmd", 00:15:11.985 "config": [] 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "subsystem": "accel", 00:15:11.985 "config": [ 00:15:11.985 { 00:15:11.985 "method": "accel_set_options", 00:15:11.985 "params": { 00:15:11.985 "small_cache_size": 128, 00:15:11.985 "large_cache_size": 16, 00:15:11.985 "task_count": 2048, 00:15:11.985 "sequence_count": 2048, 00:15:11.985 "buf_count": 2048 00:15:11.985 } 00:15:11.985 } 00:15:11.985 ] 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "subsystem": "bdev", 00:15:11.985 "config": [ 00:15:11.985 { 00:15:11.985 "method": "bdev_set_options", 00:15:11.985 "params": { 00:15:11.985 "bdev_io_pool_size": 65535, 00:15:11.985 "bdev_io_cache_size": 256, 00:15:11.985 "bdev_auto_examine": true, 00:15:11.985 "iobuf_small_cache_size": 128, 00:15:11.985 "iobuf_large_cache_size": 16 00:15:11.985 } 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "method": "bdev_raid_set_options", 00:15:11.985 "params": { 00:15:11.985 "process_window_size_kb": 1024, 00:15:11.985 "process_max_bandwidth_mb_sec": 0 00:15:11.985 } 00:15:11.985 }, 00:15:11.985 { 00:15:11.985 "method": "bdev_iscsi_set_options", 00:15:11.985 "params": { 00:15:11.985 "timeout_sec": 30 00:15:11.985 } 00:15:11.985 }, 00:15:11.985 { 00:15:11.986 "method": "bdev_nvme_set_options", 00:15:11.986 "params": { 00:15:11.986 "action_on_timeout": "none", 00:15:11.986 "timeout_us": 0, 00:15:11.986 "timeout_admin_us": 0, 00:15:11.986 "keep_alive_timeout_ms": 10000, 00:15:11.986 "arbitration_burst": 0, 00:15:11.986 "low_priority_weight": 0, 00:15:11.986 "medium_priority_weight": 0, 00:15:11.986 "high_priority_weight": 0, 00:15:11.986 "nvme_adminq_poll_period_us": 10000, 00:15:11.986 "nvme_ioq_poll_period_us": 0, 00:15:11.986 "io_queue_requests": 0, 00:15:11.986 "delay_cmd_submit": true, 00:15:11.986 "transport_retry_count": 4, 00:15:11.986 "bdev_retry_count": 3, 00:15:11.986 "transport_ack_timeout": 0, 00:15:11.986 "ctrlr_loss_timeout_sec": 0, 00:15:11.986 "reconnect_delay_sec": 0, 00:15:11.986 
"fast_io_fail_timeout_sec": 0, 00:15:11.986 "disable_auto_failback": false, 00:15:11.986 "generate_uuids": false, 00:15:11.986 "transport_tos": 0, 00:15:11.986 "nvme_error_stat": false, 00:15:11.986 "rdma_srq_size": 0, 00:15:11.986 "io_path_stat": false, 00:15:11.986 "allow_accel_sequence": false, 00:15:11.986 "rdma_max_cq_size": 0, 00:15:11.986 "rdma_cm_event_timeout_ms": 0, 00:15:11.986 "dhchap_digests": [ 00:15:11.986 "sha256", 00:15:11.986 "sha384", 00:15:11.986 "sha512" 00:15:11.986 ], 00:15:11.986 "dhchap_dhgroups": [ 00:15:11.986 "null", 00:15:11.986 "ffdhe2048", 00:15:11.986 "ffdhe3072", 00:15:11.986 "ffdhe4096", 00:15:11.986 "ffdhe6144", 00:15:11.986 "ffdhe8192" 00:15:11.986 ] 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "bdev_nvme_set_hotplug", 00:15:11.986 "params": { 00:15:11.986 "period_us": 100000, 00:15:11.986 "enable": false 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "bdev_malloc_create", 00:15:11.986 "params": { 00:15:11.986 "name": "malloc0", 00:15:11.986 "num_blocks": 8192, 00:15:11.986 "block_size": 4096, 00:15:11.986 "physical_block_size": 4096, 00:15:11.986 "uuid": "3851c035-bbab-4e2f-9465-a17885790e92", 00:15:11.986 "optimal_io_boundary": 0, 00:15:11.986 "md_size": 0, 00:15:11.986 "dif_type": 0, 00:15:11.986 "dif_is_head_of_md": false, 00:15:11.986 "dif_pi_format": 0 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "bdev_wait_for_examine" 00:15:11.986 } 00:15:11.986 ] 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "subsystem": "nbd", 00:15:11.986 "config": [] 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "subsystem": "scheduler", 00:15:11.986 "config": [ 00:15:11.986 { 00:15:11.986 "method": "framework_set_scheduler", 00:15:11.986 "params": { 00:15:11.986 "name": "static" 00:15:11.986 } 00:15:11.986 } 00:15:11.986 ] 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "subsystem": "nvmf", 00:15:11.986 "config": [ 00:15:11.986 { 00:15:11.986 "method": "nvmf_set_config", 00:15:11.986 "params": { 00:15:11.986 "discovery_filter": "match_any", 00:15:11.986 "admin_cmd_passthru": { 00:15:11.986 "identify_ctrlr": false 00:15:11.986 }, 00:15:11.986 "dhchap_digests": [ 00:15:11.986 "sha256", 00:15:11.986 "sha384", 00:15:11.986 "sha512" 00:15:11.986 ], 00:15:11.986 "dhchap_dhgroups": [ 00:15:11.986 "null", 00:15:11.986 "ffdhe2048", 00:15:11.986 "ffdhe3072", 00:15:11.986 "ffdhe4096", 00:15:11.986 "ffdhe6144", 00:15:11.986 "ffdhe8192" 00:15:11.986 ] 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "nvmf_set_max_subsystems", 00:15:11.986 "params": { 00:15:11.986 "max_subsystems": 1024 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "nvmf_set_crdt", 00:15:11.986 "params": { 00:15:11.986 "crdt1": 0, 00:15:11.986 "crdt2": 0, 00:15:11.986 "crdt3": 0 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "nvmf_create_transport", 00:15:11.986 "params": { 00:15:11.986 "trtype": "TCP", 00:15:11.986 "max_queue_depth": 128, 00:15:11.986 "max_io_qpairs_per_ctrlr": 127, 00:15:11.986 "in_capsule_data_size": 4096, 00:15:11.986 "max_io_size": 131072, 00:15:11.986 "io_unit_size": 131072, 00:15:11.986 "max_aq_depth": 128, 00:15:11.986 "num_shared_buffers": 511, 00:15:11.986 "buf_cache_size": 4294967295, 00:15:11.986 "dif_insert_or_strip": false, 00:15:11.986 "zcopy": false, 00:15:11.986 "c2h_success": false, 00:15:11.986 "sock_priority": 0, 00:15:11.986 "abort_timeout_sec": 1, 00:15:11.986 "ack_timeout": 0, 00:15:11.986 "data_wr_pool_size": 0 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 
00:15:11.986 "method": "nvmf_create_subsystem", 00:15:11.986 "params": { 00:15:11.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.986 "allow_any_host": false, 00:15:11.986 "serial_number": "00000000000000000000", 00:15:11.986 "model_number": "SPDK bdev Controller", 00:15:11.986 "max_namespaces": 32, 00:15:11.986 "min_cntlid": 1, 00:15:11.986 "max_cntlid": 65519, 00:15:11.986 "ana_reporting": false 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "nvmf_subsystem_add_host", 00:15:11.986 "params": { 00:15:11.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.986 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.986 "psk": "key0" 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "nvmf_subsystem_add_ns", 00:15:11.986 "params": { 00:15:11.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.986 "namespace": { 00:15:11.986 "nsid": 1, 00:15:11.986 "bdev_name": "malloc0", 00:15:11.986 "nguid": "3851C035BBAB4E2F9465A17885790E92", 00:15:11.986 "uuid": "3851c035-bbab-4e2f-9465-a17885790e92", 00:15:11.986 "no_auto_visible": false 00:15:11.986 } 00:15:11.986 } 00:15:11.986 }, 00:15:11.986 { 00:15:11.986 "method": "nvmf_subsystem_add_listener", 00:15:11.986 "params": { 00:15:11.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.986 "listen_address": { 00:15:11.986 "trtype": "TCP", 00:15:11.986 "adrfam": "IPv4", 00:15:11.986 "traddr": "10.0.0.3", 00:15:11.986 "trsvcid": "4420" 00:15:11.986 }, 00:15:11.986 "secure_channel": false, 00:15:11.986 "sock_impl": "ssl" 00:15:11.986 } 00:15:11.986 } 00:15:11.986 ] 00:15:11.986 } 00:15:11.986 ] 00:15:11.986 }' 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85238 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85238 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 85238 ']' 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:11.986 11:04:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.986 [2024-10-29 11:04:17.475214] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:15:11.986 [2024-10-29 11:04:17.475310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.245 [2024-10-29 11:04:17.625167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.245 [2024-10-29 11:04:17.643057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.245 [2024-10-29 11:04:17.643127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.245 [2024-10-29 11:04:17.643152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.245 [2024-10-29 11:04:17.643160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.245 [2024-10-29 11:04:17.643166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.245 [2024-10-29 11:04:17.643531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.503 [2024-10-29 11:04:17.784515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.503 [2024-10-29 11:04:17.837436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.503 [2024-10-29 11:04:17.869361] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.503 [2024-10-29 11:04:17.869596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85266 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85266 /var/tmp/bdevperf.sock 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 85266 ']' 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:13.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
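The initiator side is restarted the same way: the bdevperf instance launched here (pid 85266) is given -c /dev/fd/63, and the JSON echoed just below is the configuration saved from the previous bdevperf process, so the keyring entry, the TLS bdev_nvme_attach_controller and bdev_enable_histogram all happen at startup. The only runtime step left is perform_tests. A sketch with a hypothetical file path in place of the file descriptor:

# Config-driven bdevperf run mirroring the bperfcfg echoed below.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
BPERF_SOCK=/var/tmp/bdevperf.sock
# Same workload flags as above; /tmp/bperf_config.json stands in for /dev/fd/63.
$BDEVPERF -m 2 -z -r "$BPERF_SOCK" -q 128 -o 4k -w verify -t 1 -c /tmp/bperf_config.json &
# Once the RPC socket is listening, trigger the verify workload and print the JSON result.
$BPERF_PY -s "$BPERF_SOCK" perform_tests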
00:15:13.070 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:13.070 "subsystems": [ 00:15:13.070 { 00:15:13.070 "subsystem": "keyring", 00:15:13.070 "config": [ 00:15:13.070 { 00:15:13.070 "method": "keyring_file_add_key", 00:15:13.070 "params": { 00:15:13.070 "name": "key0", 00:15:13.070 "path": "/tmp/tmp.F3JpIVjEy0" 00:15:13.070 } 00:15:13.070 } 00:15:13.070 ] 00:15:13.070 }, 00:15:13.070 { 00:15:13.070 "subsystem": "iobuf", 00:15:13.070 "config": [ 00:15:13.070 { 00:15:13.070 "method": "iobuf_set_options", 00:15:13.070 "params": { 00:15:13.070 "small_pool_count": 8192, 00:15:13.070 "large_pool_count": 1024, 00:15:13.070 "small_bufsize": 8192, 00:15:13.070 "large_bufsize": 135168, 00:15:13.070 "enable_numa": false 00:15:13.070 } 00:15:13.070 } 00:15:13.070 ] 00:15:13.070 }, 00:15:13.070 { 00:15:13.070 "subsystem": "sock", 00:15:13.070 "config": [ 00:15:13.070 { 00:15:13.070 "method": "sock_set_default_impl", 00:15:13.070 "params": { 00:15:13.070 "impl_name": "uring" 00:15:13.070 } 00:15:13.070 }, 00:15:13.070 { 00:15:13.070 "method": "sock_impl_set_options", 00:15:13.070 "params": { 00:15:13.070 "impl_name": "ssl", 00:15:13.070 "recv_buf_size": 4096, 00:15:13.070 "send_buf_size": 4096, 00:15:13.070 "enable_recv_pipe": true, 00:15:13.070 "enable_quickack": false, 00:15:13.070 "enable_placement_id": 0, 00:15:13.070 "enable_zerocopy_send_server": true, 00:15:13.070 "enable_zerocopy_send_client": false, 00:15:13.070 "zerocopy_threshold": 0, 00:15:13.070 "tls_version": 0, 00:15:13.070 "enable_ktls": false 00:15:13.070 } 00:15:13.070 }, 00:15:13.070 { 00:15:13.070 "method": "sock_impl_set_options", 00:15:13.070 "params": { 00:15:13.070 "impl_name": "posix", 00:15:13.070 "recv_buf_size": 2097152, 00:15:13.070 "send_buf_size": 2097152, 00:15:13.070 "enable_recv_pipe": true, 00:15:13.070 "enable_quickack": false, 00:15:13.070 "enable_placement_id": 0, 00:15:13.070 "enable_zerocopy_send_server": true, 00:15:13.070 "enable_zerocopy_send_client": false, 00:15:13.070 "zerocopy_threshold": 0, 00:15:13.070 "tls_version": 0, 00:15:13.070 "enable_ktls": false 00:15:13.070 } 00:15:13.070 }, 00:15:13.070 { 00:15:13.070 "method": "sock_impl_set_options", 00:15:13.070 "params": { 00:15:13.070 "impl_name": "uring", 00:15:13.070 "recv_buf_size": 2097152, 00:15:13.070 "send_buf_size": 2097152, 00:15:13.070 "enable_recv_pipe": true, 00:15:13.070 "enable_quickack": false, 00:15:13.070 "enable_placement_id": 0, 00:15:13.070 "enable_zerocopy_send_server": false, 00:15:13.070 "enable_zerocopy_send_client": false, 00:15:13.070 "zerocopy_threshold": 0, 00:15:13.070 "tls_version": 0, 00:15:13.070 "enable_ktls": false 00:15:13.070 } 00:15:13.070 } 00:15:13.070 ] 00:15:13.070 }, 00:15:13.070 { 00:15:13.070 "subsystem": "vmd", 00:15:13.070 "config": [] 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "subsystem": "accel", 00:15:13.071 "config": [ 00:15:13.071 { 00:15:13.071 "method": "accel_set_options", 00:15:13.071 "params": { 00:15:13.071 "small_cache_size": 128, 00:15:13.071 "large_cache_size": 16, 00:15:13.071 "task_count": 2048, 00:15:13.071 "sequence_count": 2048, 00:15:13.071 "buf_count": 2048 00:15:13.071 } 00:15:13.071 } 00:15:13.071 ] 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "subsystem": "bdev", 00:15:13.071 "config": [ 00:15:13.071 { 00:15:13.071 "method": "bdev_set_options", 00:15:13.071 "params": { 00:15:13.071 "bdev_io_pool_size": 65535, 00:15:13.071 "bdev_io_cache_size": 256, 00:15:13.071 "bdev_auto_examine": true, 00:15:13.071 "iobuf_small_cache_size": 128, 00:15:13.071 
"iobuf_large_cache_size": 16 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_raid_set_options", 00:15:13.071 "params": { 00:15:13.071 "process_window_size_kb": 1024, 00:15:13.071 "process_max_bandwidth_mb_sec": 0 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_iscsi_set_options", 00:15:13.071 "params": { 00:15:13.071 "timeout_sec": 30 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_nvme_set_options", 00:15:13.071 "params": { 00:15:13.071 "action_on_timeout": "none", 00:15:13.071 "timeout_us": 0, 00:15:13.071 "timeout_admin_us": 0, 00:15:13.071 "keep_alive_timeout_ms": 10000, 00:15:13.071 "arbitration_burst": 0, 00:15:13.071 "low_priority_weight": 0, 00:15:13.071 "medium_priority_weight": 0, 00:15:13.071 "high_priority_weight": 0, 00:15:13.071 "nvme_adminq_poll_period_us": 10000, 00:15:13.071 "nvme_ioq_poll_period_us": 0, 00:15:13.071 "io_queue_requests": 512, 00:15:13.071 "delay_cmd_submit": true, 00:15:13.071 "transport_retry_count": 4, 00:15:13.071 "bdev_retry_count": 3, 00:15:13.071 "transport_ack_timeout": 0, 00:15:13.071 "ctrlr_loss_timeout_sec": 0, 00:15:13.071 "reconnect_delay_sec": 0, 00:15:13.071 "fast_io_fail_timeout_sec": 0, 00:15:13.071 "disable_auto_failback": false, 00:15:13.071 "generate_uuids": false, 00:15:13.071 "transport_tos": 0, 00:15:13.071 "nvme_error_stat": false, 00:15:13.071 "rdma_srq_size": 0, 00:15:13.071 "io_path_stat": false, 00:15:13.071 "allow_accel_sequence": false, 00:15:13.071 "rdma_max_cq_size": 0, 00:15:13.071 "rdma_cm_event_timeout_ms": 0, 00:15:13.071 "dhchap_digests": [ 00:15:13.071 "sha256", 00:15:13.071 "sha384", 00:15:13.071 "sha512" 00:15:13.071 ], 00:15:13.071 "dhchap_dhgroups": [ 00:15:13.071 "null", 00:15:13.071 "ffdhe2048", 00:15:13.071 "ffdhe3072", 00:15:13.071 "ffdhe4096", 00:15:13.071 "ffdhe6144", 00:15:13.071 "ffdhe8192" 00:15:13.071 ] 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_nvme_attach_controller", 00:15:13.071 "params": { 00:15:13.071 "name": "nvme0", 00:15:13.071 "trtype": "TCP", 00:15:13.071 "adrfam": "IPv4", 00:15:13.071 "traddr": "10.0.0.3", 00:15:13.071 "trsvcid": "4420", 00:15:13.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.071 "prchk_reftag": false, 00:15:13.071 "prchk_guard": false, 00:15:13.071 "ctrlr_loss_timeout_sec": 0, 00:15:13.071 "reconnect_delay_sec": 0, 00:15:13.071 "fast_io_fail_timeout_sec": 0, 00:15:13.071 "psk": "key0", 00:15:13.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.071 "hdgst": false, 00:15:13.071 "ddgst": false, 00:15:13.071 "multipath": "multipath" 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_nvme_set_hotplug", 00:15:13.071 "params": { 00:15:13.071 "period_us": 100000, 00:15:13.071 "enable": false 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_enable_histogram", 00:15:13.071 "params": { 00:15:13.071 "name": "nvme0n1", 00:15:13.071 "enable": true 00:15:13.071 } 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "method": "bdev_wait_for_examine" 00:15:13.071 } 00:15:13.071 ] 00:15:13.071 }, 00:15:13.071 { 00:15:13.071 "subsystem": "nbd", 00:15:13.071 "config": [] 00:15:13.071 } 00:15:13.071 ] 00:15:13.071 }' 00:15:13.071 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:13.071 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.329 [2024-10-29 11:04:18.576200] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 
initialization... 00:15:13.329 [2024-10-29 11:04:18.576278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85266 ] 00:15:13.329 [2024-10-29 11:04:18.720495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.329 [2024-10-29 11:04:18.740360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.587 [2024-10-29 11:04:18.850129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.587 [2024-10-29 11:04:18.878004] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.587 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.587 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:15:13.587 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:13.587 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:13.846 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.846 11:04:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.104 Running I/O for 1 seconds... 00:15:15.037 4441.00 IOPS, 17.35 MiB/s 00:15:15.037 Latency(us) 00:15:15.037 [2024-10-29T11:04:20.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.037 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:15.037 Verification LBA range: start 0x0 length 0x2000 00:15:15.037 nvme0n1 : 1.02 4464.92 17.44 0.00 0.00 28323.89 7745.16 20733.21 00:15:15.037 [2024-10-29T11:04:20.534Z] =================================================================================================================== 00:15:15.037 [2024-10-29T11:04:20.534Z] Total : 4464.92 17.44 0.00 0.00 28323.89 7745.16 20733.21 00:15:15.037 { 00:15:15.037 "results": [ 00:15:15.037 { 00:15:15.037 "job": "nvme0n1", 00:15:15.037 "core_mask": "0x2", 00:15:15.037 "workload": "verify", 00:15:15.037 "status": "finished", 00:15:15.037 "verify_range": { 00:15:15.037 "start": 0, 00:15:15.037 "length": 8192 00:15:15.037 }, 00:15:15.037 "queue_depth": 128, 00:15:15.037 "io_size": 4096, 00:15:15.037 "runtime": 1.02331, 00:15:15.037 "iops": 4464.9226529595135, 00:15:15.037 "mibps": 17.4411041131231, 00:15:15.037 "io_failed": 0, 00:15:15.037 "io_timeout": 0, 00:15:15.037 "avg_latency_us": 28323.890121172328, 00:15:15.037 "min_latency_us": 7745.163636363636, 00:15:15.037 "max_latency_us": 20733.20727272727 00:15:15.037 } 00:15:15.037 ], 00:15:15.037 "core_count": 1 00:15:15.037 } 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@811 -- # id=0 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:15.037 nvmf_trace.0 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:15:15.037 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85266 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 85266 ']' 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 85266 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85266 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85266' 00:15:15.296 killing process with pid 85266 00:15:15.296 Received shutdown signal, test time was about 1.000000 seconds 00:15:15.296 00:15:15.296 Latency(us) 00:15:15.296 [2024-10-29T11:04:20.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.296 [2024-10-29T11:04:20.793Z] =================================================================================================================== 00:15:15.296 [2024-10-29T11:04:20.793Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 85266 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 85266 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:15.296 rmmod nvme_tcp 00:15:15.296 rmmod nvme_fabrics 00:15:15.296 rmmod nvme_keyring 00:15:15.296 11:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 85238 ']' 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 85238 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 85238 ']' 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 85238 00:15:15.296 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85238 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:15.554 killing process with pid 85238 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85238' 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 85238 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 85238 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:15.554 11:04:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:15.554 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:15.554 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:15.554 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:15.554 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:15.554 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:15:15.554 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zrHpwTUfu4 /tmp/tmp.knsncvYMXJ /tmp/tmp.F3JpIVjEy0 00:15:15.813 00:15:15.813 real 1m20.193s 00:15:15.813 user 2m11.818s 00:15:15.813 sys 0m26.307s 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.813 ************************************ 00:15:15.813 END TEST nvmf_tls 00:15:15.813 ************************************ 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:15.813 ************************************ 00:15:15.813 START TEST nvmf_fips 00:15:15.813 ************************************ 00:15:15.813 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:16.073 * Looking for test storage... 
00:15:16.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:16.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.073 --rc genhtml_branch_coverage=1 00:15:16.073 --rc genhtml_function_coverage=1 00:15:16.073 --rc genhtml_legend=1 00:15:16.073 --rc geninfo_all_blocks=1 00:15:16.073 --rc geninfo_unexecuted_blocks=1 00:15:16.073 00:15:16.073 ' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:16.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.073 --rc genhtml_branch_coverage=1 00:15:16.073 --rc genhtml_function_coverage=1 00:15:16.073 --rc genhtml_legend=1 00:15:16.073 --rc geninfo_all_blocks=1 00:15:16.073 --rc geninfo_unexecuted_blocks=1 00:15:16.073 00:15:16.073 ' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:16.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.073 --rc genhtml_branch_coverage=1 00:15:16.073 --rc genhtml_function_coverage=1 00:15:16.073 --rc genhtml_legend=1 00:15:16.073 --rc geninfo_all_blocks=1 00:15:16.073 --rc geninfo_unexecuted_blocks=1 00:15:16.073 00:15:16.073 ' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:16.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.073 --rc genhtml_branch_coverage=1 00:15:16.073 --rc genhtml_function_coverage=1 00:15:16.073 --rc genhtml_legend=1 00:15:16.073 --rc geninfo_all_blocks=1 00:15:16.073 --rc geninfo_unexecuted_blocks=1 00:15:16.073 00:15:16.073 ' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
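The xtrace above shows scripts/common.sh splitting the lcov version string on '.', '-' and ':' into arrays and comparing the fields numerically (the "lt 1.15 2" / cmp_versions path). A minimal standalone sketch of that field-by-field idea, written here as a hypothetical simplified helper and not the actual SPDK function:

    # version_lt A B -> exit 0 when A sorts before B, comparing numeric fields left to right
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"    # "1.15" -> (1 15), mirroring the IFS=.-: split in the log
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # same decision the trace above reaches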
00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.073 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.074 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:16.074 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:16.332 Error setting digest 00:15:16.332 406255838F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:16.332 406255838F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:16.332 
11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:16.332 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:16.333 Cannot find device "nvmf_init_br" 00:15:16.333 11:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:16.333 Cannot find device "nvmf_init_br2" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:16.333 Cannot find device "nvmf_tgt_br" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.333 Cannot find device "nvmf_tgt_br2" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:16.333 Cannot find device "nvmf_init_br" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:16.333 Cannot find device "nvmf_init_br2" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:16.333 Cannot find device "nvmf_tgt_br" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:16.333 Cannot find device "nvmf_tgt_br2" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:16.333 Cannot find device "nvmf_br" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:16.333 Cannot find device "nvmf_init_if" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:16.333 Cannot find device "nvmf_init_if2" 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.333 11:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.333 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:16.592 11:04:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:16.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:15:16.592 00:15:16.592 --- 10.0.0.3 ping statistics --- 00:15:16.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.592 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:16.592 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:16.592 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:15:16.592 00:15:16.592 --- 10.0.0.4 ping statistics --- 00:15:16.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.592 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:16.592 00:15:16.592 --- 10.0.0.1 ping statistics --- 00:15:16.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.592 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:16.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:16.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:15:16.592 00:15:16.592 --- 10.0.0.2 ping statistics --- 00:15:16.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.592 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85577 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85577 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 85577 ']' 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:16.592 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:16.850 [2024-10-29 11:04:22.148014] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
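The target launched above runs inside a disposable virtual test bed: a network namespace for the target, veth pairs bridged together on the host, SPDK-tagged iptables ACCEPT rules for port 4420, and the pings as a reachability check. Condensed to the first initiator/target pair, the plumbing shown in the trace amounts to roughly the following (names and addresses are the ones visible in the log; assumes root and iproute2):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # the log additionally tags these rules with an SPDK_NVMF comment
    ping -c 1 10.0.0.3    # host-side check that the namespaced target address answers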
00:15:16.850 [2024-10-29 11:04:22.148160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.850 [2024-10-29 11:04:22.298764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.850 [2024-10-29 11:04:22.320533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.850 [2024-10-29 11:04:22.320596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.850 [2024-10-29 11:04:22.320609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.850 [2024-10-29 11:04:22.320619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.850 [2024-10-29 11:04:22.320628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.850 [2024-10-29 11:04:22.320973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.109 [2024-10-29 11:04:22.353431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.gBl 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.gBl 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.gBl 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.gBl 00:15:17.109 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.368 [2024-10-29 11:04:22.731108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.368 [2024-10-29 11:04:22.747063] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:17.368 [2024-10-29 11:04:22.747248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:17.368 malloc0 00:15:17.368 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85607 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85607 /var/tmp/bdevperf.sock 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 85607 ']' 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.368 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:17.369 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.369 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:17.369 11:04:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:17.628 [2024-10-29 11:04:22.875000] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:15:17.628 [2024-10-29 11:04:22.875088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85607 ] 00:15:17.628 [2024-10-29 11:04:23.017654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.629 [2024-10-29 11:04:23.038031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.629 [2024-10-29 11:04:23.066853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.629 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:17.629 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:15:17.629 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gBl 00:15:18.197 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:18.197 [2024-10-29 11:04:23.626495] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.456 TLSTESTn1 00:15:18.456 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.456 Running I/O for 10 seconds... 
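The ten-second run that follows is driven entirely over the bdevperf RPC socket: the PSK written to /tmp/spdk-psk.gBl is registered as keyring entry key0, a TLS-enabled NVMe/TCP controller is attached against the 10.0.0.3:4420 listener, and perform_tests kicks off the workload that the -q 128 -o 4096 -w verify -t 10 bdevperf instance was started with. A sketch of that RPC sequence, using only the paths and arguments already visible in the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/spdk-psk.gBl                 # file-backed TLS PSK becomes key "key0"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests                     # starts the verify workload, prints per-second IOPS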
00:15:20.328 4260.00 IOPS, 16.64 MiB/s [2024-10-29T11:04:27.205Z] 4284.00 IOPS, 16.73 MiB/s [2024-10-29T11:04:28.140Z] 4224.00 IOPS, 16.50 MiB/s [2024-10-29T11:04:29.074Z] 4187.50 IOPS, 16.36 MiB/s [2024-10-29T11:04:30.011Z] 4134.20 IOPS, 16.15 MiB/s [2024-10-29T11:04:30.945Z] 4121.50 IOPS, 16.10 MiB/s [2024-10-29T11:04:31.879Z] 4078.14 IOPS, 15.93 MiB/s [2024-10-29T11:04:32.812Z] 4073.25 IOPS, 15.91 MiB/s [2024-10-29T11:04:34.189Z] 4059.67 IOPS, 15.86 MiB/s [2024-10-29T11:04:34.189Z] 4053.90 IOPS, 15.84 MiB/s 00:15:28.692 Latency(us) 00:15:28.692 [2024-10-29T11:04:34.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.692 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:28.692 Verification LBA range: start 0x0 length 0x2000 00:15:28.692 TLSTESTn1 : 10.02 4059.81 15.86 0.00 0.00 31470.92 5600.35 30504.03 00:15:28.692 [2024-10-29T11:04:34.189Z] =================================================================================================================== 00:15:28.692 [2024-10-29T11:04:34.189Z] Total : 4059.81 15.86 0.00 0.00 31470.92 5600.35 30504.03 00:15:28.692 { 00:15:28.692 "results": [ 00:15:28.692 { 00:15:28.692 "job": "TLSTESTn1", 00:15:28.692 "core_mask": "0x4", 00:15:28.692 "workload": "verify", 00:15:28.692 "status": "finished", 00:15:28.692 "verify_range": { 00:15:28.692 "start": 0, 00:15:28.692 "length": 8192 00:15:28.692 }, 00:15:28.692 "queue_depth": 128, 00:15:28.692 "io_size": 4096, 00:15:28.692 "runtime": 10.015732, 00:15:28.692 "iops": 4059.8131020278897, 00:15:28.692 "mibps": 15.858644929796444, 00:15:28.692 "io_failed": 0, 00:15:28.692 "io_timeout": 0, 00:15:28.692 "avg_latency_us": 31470.9170329233, 00:15:28.692 "min_latency_us": 5600.349090909091, 00:15:28.692 "max_latency_us": 30504.02909090909 00:15:28.692 } 00:15:28.692 ], 00:15:28.692 "core_count": 1 00:15:28.692 } 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:28.692 nvmf_trace.0 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85607 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 85607 ']' 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
85607 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85607 00:15:28.692 killing process with pid 85607 00:15:28.692 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.692 00:15:28.692 Latency(us) 00:15:28.692 [2024-10-29T11:04:34.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.692 [2024-10-29T11:04:34.189Z] =================================================================================================================== 00:15:28.692 [2024-10-29T11:04:34.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85607' 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 85607 00:15:28.692 11:04:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 85607 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.692 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.692 rmmod nvme_tcp 00:15:28.692 rmmod nvme_fabrics 00:15:28.951 rmmod nvme_keyring 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85577 ']' 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85577 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 85577 ']' 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 85577 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85577 00:15:28.951 killing process with pid 85577 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85577' 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 85577 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 85577 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:28.951 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:29.209 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.209 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:29.209 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:29.209 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:29.209 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:29.210 11:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.gBl 00:15:29.210 00:15:29.210 real 0m13.393s 00:15:29.210 user 0m18.282s 00:15:29.210 sys 0m5.522s 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:29.210 ************************************ 00:15:29.210 END TEST nvmf_fips 00:15:29.210 ************************************ 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.210 ************************************ 00:15:29.210 START TEST nvmf_control_msg_list 00:15:29.210 ************************************ 00:15:29.210 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:29.470 * Looking for test storage... 00:15:29.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:29.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.470 --rc genhtml_branch_coverage=1 00:15:29.470 --rc genhtml_function_coverage=1 00:15:29.470 --rc genhtml_legend=1 00:15:29.470 --rc geninfo_all_blocks=1 00:15:29.470 --rc geninfo_unexecuted_blocks=1 00:15:29.470 00:15:29.470 ' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:29.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.470 --rc genhtml_branch_coverage=1 00:15:29.470 --rc genhtml_function_coverage=1 00:15:29.470 --rc genhtml_legend=1 00:15:29.470 --rc geninfo_all_blocks=1 00:15:29.470 --rc geninfo_unexecuted_blocks=1 00:15:29.470 00:15:29.470 ' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:29.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.470 --rc genhtml_branch_coverage=1 00:15:29.470 --rc genhtml_function_coverage=1 00:15:29.470 --rc genhtml_legend=1 00:15:29.470 --rc geninfo_all_blocks=1 00:15:29.470 --rc geninfo_unexecuted_blocks=1 00:15:29.470 00:15:29.470 ' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:29.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.470 --rc genhtml_branch_coverage=1 00:15:29.470 --rc genhtml_function_coverage=1 00:15:29.470 --rc genhtml_legend=1 00:15:29.470 --rc geninfo_all_blocks=1 00:15:29.470 --rc 
geninfo_unexecuted_blocks=1 00:15:29.470 00:15:29.470 ' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.470 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.471 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:29.471 Cannot find device "nvmf_init_br" 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:29.471 Cannot find device "nvmf_init_br2" 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:29.471 Cannot find device "nvmf_tgt_br" 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:29.471 Cannot find device "nvmf_tgt_br2" 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:29.471 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:29.730 Cannot find device "nvmf_init_br" 00:15:29.730 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:29.730 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:29.730 Cannot find device "nvmf_init_br2" 00:15:29.730 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:29.730 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:29.730 Cannot find device "nvmf_tgt_br" 00:15:29.730 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:29.730 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:29.730 Cannot find device "nvmf_tgt_br2" 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:29.730 Cannot find device "nvmf_br" 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:29.730 Cannot find 
device "nvmf_init_if" 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:29.730 Cannot find device "nvmf_init_if2" 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:29.730 11:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:29.730 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:29.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:29.990 00:15:29.990 --- 10.0.0.3 ping statistics --- 00:15:29.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.990 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:29.990 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:29.990 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:15:29.990 00:15:29.990 --- 10.0.0.4 ping statistics --- 00:15:29.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.990 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:29.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:29.990 00:15:29.990 --- 10.0.0.1 ping statistics --- 00:15:29.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.990 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:29.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:15:29.990 00:15:29.990 --- 10.0.0.2 ping statistics --- 00:15:29.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.990 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85986 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85986 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 85986 ']' 00:15:29.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:29.990 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.990 [2024-10-29 11:04:35.370805] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:15:29.990 [2024-10-29 11:04:35.370894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.250 [2024-10-29 11:04:35.525618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.250 [2024-10-29 11:04:35.548914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.250 [2024-10-29 11:04:35.548974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.250 [2024-10-29 11:04:35.548988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.250 [2024-10-29 11:04:35.548999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.250 [2024-10-29 11:04:35.549008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.250 [2024-10-29 11:04:35.549429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.250 [2024-10-29 11:04:35.583452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:30.250 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 [2024-10-29 11:04:35.682586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 Malloc0 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:30.251 [2024-10-29 11:04:35.722433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=86011 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=86012 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=86013 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:30.251 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 86011 00:15:30.520 [2024-10-29 11:04:35.916893] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:30.520 [2024-10-29 11:04:35.917093] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:30.520 [2024-10-29 11:04:35.926890] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:31.457 Initializing NVMe Controllers 00:15:31.457 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:31.457 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:31.457 Initialization complete. Launching workers. 00:15:31.457 ======================================================== 00:15:31.457 Latency(us) 00:15:31.457 Device Information : IOPS MiB/s Average min max 00:15:31.457 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3418.00 13.35 292.23 195.64 639.76 00:15:31.457 ======================================================== 00:15:31.457 Total : 3418.00 13.35 292.23 195.64 639.76 00:15:31.457 00:15:31.457 Initializing NVMe Controllers 00:15:31.457 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:31.457 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:31.457 Initialization complete. Launching workers. 00:15:31.457 ======================================================== 00:15:31.457 Latency(us) 00:15:31.457 Device Information : IOPS MiB/s Average min max 00:15:31.457 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3436.00 13.42 290.74 195.68 550.62 00:15:31.457 ======================================================== 00:15:31.457 Total : 3436.00 13.42 290.74 195.68 550.62 00:15:31.457 00:15:31.457 Initializing NVMe Controllers 00:15:31.457 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:31.457 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:31.457 Initialization complete. Launching workers. 
00:15:31.457 ======================================================== 00:15:31.457 Latency(us) 00:15:31.457 Device Information : IOPS MiB/s Average min max 00:15:31.457 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3460.00 13.52 288.64 117.09 490.72 00:15:31.457 ======================================================== 00:15:31.457 Total : 3460.00 13.52 288.64 117.09 490.72 00:15:31.457 00:15:31.717 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 86012 00:15:31.717 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 86013 00:15:31.717 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:31.717 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:31.717 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:31.717 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:31.717 rmmod nvme_tcp 00:15:31.717 rmmod nvme_fabrics 00:15:31.717 rmmod nvme_keyring 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85986 ']' 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85986 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 85986 ']' 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 85986 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85986 00:15:31.717 killing process with pid 85986 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85986' 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 85986 00:15:31.717 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 85986 00:15:32.011 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:32.011 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.012 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:32.271 ************************************ 00:15:32.271 END TEST 
nvmf_control_msg_list 00:15:32.271 ************************************ 00:15:32.271 00:15:32.271 real 0m2.806s 00:15:32.271 user 0m4.788s 00:15:32.271 sys 0m1.278s 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.271 ************************************ 00:15:32.271 START TEST nvmf_wait_for_buf 00:15:32.271 ************************************ 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:32.271 * Looking for test storage... 00:15:32.271 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:15:32.271 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.530 --rc genhtml_branch_coverage=1 00:15:32.530 --rc genhtml_function_coverage=1 00:15:32.530 --rc genhtml_legend=1 00:15:32.530 --rc geninfo_all_blocks=1 00:15:32.530 --rc geninfo_unexecuted_blocks=1 00:15:32.530 00:15:32.530 ' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.530 --rc genhtml_branch_coverage=1 00:15:32.530 --rc genhtml_function_coverage=1 00:15:32.530 --rc genhtml_legend=1 00:15:32.530 --rc geninfo_all_blocks=1 00:15:32.530 --rc geninfo_unexecuted_blocks=1 00:15:32.530 00:15:32.530 ' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.530 --rc genhtml_branch_coverage=1 00:15:32.530 --rc genhtml_function_coverage=1 00:15:32.530 --rc genhtml_legend=1 00:15:32.530 --rc geninfo_all_blocks=1 00:15:32.530 --rc geninfo_unexecuted_blocks=1 00:15:32.530 00:15:32.530 ' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:32.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.530 --rc genhtml_branch_coverage=1 00:15:32.530 --rc genhtml_function_coverage=1 00:15:32.530 --rc genhtml_legend=1 00:15:32.530 --rc geninfo_all_blocks=1 00:15:32.530 --rc geninfo_unexecuted_blocks=1 00:15:32.530 00:15:32.530 ' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.530 11:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.530 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.531 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
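For reference, the trace at the top of this test probes the installed lcov release and routes it through the cmp_versions/lt helpers in scripts/common.sh: the version strings are split on '.' and '-', then compared component by component. Below is a minimal standalone sketch of that comparison logic, assuming numeric components; the helper name ver_lt is illustrative only, and the repository's real helpers (visible in the trace) carry extra validation.

# Minimal sketch (illustrative): succeed when $1 is an older version than $2,
# comparing dot/dash-separated numeric components the way the cmp_versions trace does.
ver_lt() {
    local -a ver1 ver2
    local v comp1 comp2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        comp1=${ver1[v]:-0}
        comp2=${ver2[v]:-0}
        (( comp1 > comp2 )) && return 1
        (( comp1 < comp2 )) && return 0
    done
    return 1   # equal versions are not "less than"
}

# The trace above effectively evaluates "lt 1.15 2", which succeeds, so the
# branch/function-coverage LCOV_OPTS flags get exported for the older lcov.
ver_lt 1.15 2 && echo "older lcov: enabling extra coverage flags"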
00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.531 Cannot find device "nvmf_init_br" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.531 Cannot find device "nvmf_init_br2" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.531 Cannot find device "nvmf_tgt_br" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.531 Cannot find device "nvmf_tgt_br2" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.531 Cannot find device "nvmf_init_br" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.531 Cannot find device "nvmf_init_br2" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.531 Cannot find device "nvmf_tgt_br" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.531 Cannot find device "nvmf_tgt_br2" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.531 Cannot find device "nvmf_br" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:32.531 Cannot find device "nvmf_init_if" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:32.531 Cannot find device "nvmf_init_if2" 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.531 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:32.531 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.531 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:32.790 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.790 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:15:32.790 00:15:32.790 --- 10.0.0.3 ping statistics --- 00:15:32.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.790 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:32.790 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:32.790 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:32.790 00:15:32.790 --- 10.0.0.4 ping statistics --- 00:15:32.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.790 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:32.790 00:15:32.790 --- 10.0.0.1 ping statistics --- 00:15:32.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.790 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:32.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:32.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:15:32.790 00:15:32.790 --- 10.0.0.2 ping statistics --- 00:15:32.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.790 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=86249 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 86249 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 86249 ']' 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.790 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.050 [2024-10-29 11:04:38.339897] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
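The nvmf_veth_init entries above build the test's virtual topology before the target comes up: namespace nvmf_tgt_ns_spdk holds the target-side veth ends (10.0.0.3 and 10.0.0.4), the initiator-side ends (10.0.0.1 and 10.0.0.2) stay in the root namespace, everything is joined through the nvmf_br bridge, TCP port 4420 is opened with iptables rules tagged SPDK_NVMF, and each address is verified with a single ping. A condensed sketch of that sequence follows, showing one interface pair only (the trace adds nvmf_init_if2/nvmf_tgt_if2 the same way); the commands are copied from the trace.

# Condensed sketch of the topology the trace above creates (one veth pair shown).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the initiator- and target-side peers together in the root namespace.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip only these rules later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Reachability check: initiator pings the target-side address and vice versa.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Note that only the *_if ends carry IP addresses; the *_br peers stay addressless and act purely as bridge ports, and the SPDK_NVMF comment on each rule is what lets nvmftestfini remove exactly these rules during cleanup.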
00:15:33.050 [2024-10-29 11:04:38.339993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.050 [2024-10-29 11:04:38.495235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.050 [2024-10-29 11:04:38.518131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.050 [2024-10-29 11:04:38.518196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.050 [2024-10-29 11:04:38.518224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.050 [2024-10-29 11:04:38.518235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.050 [2024-10-29 11:04:38.518244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.050 [2024-10-29 11:04:38.518634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 [2024-10-29 11:04:38.651431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 Malloc0 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 [2024-10-29 11:04:38.697689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.310 [2024-10-29 11:04:38.721813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.310 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:33.569 [2024-10-29 11:04:38.925584] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:34.947 Initializing NVMe Controllers 00:15:34.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:34.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:34.947 Initialization complete. Launching workers. 00:15:34.947 ======================================================== 00:15:34.947 Latency(us) 00:15:34.947 Device Information : IOPS MiB/s Average min max 00:15:34.947 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.93 62.49 8001.30 6978.56 8997.00 00:15:34.947 ======================================================== 00:15:34.947 Total : 499.93 62.49 8001.30 6978.56 8997.00 00:15:34.947 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:34.947 rmmod nvme_tcp 00:15:34.947 rmmod nvme_fabrics 00:15:34.947 rmmod nvme_keyring 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 86249 ']' 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 86249 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 86249 ']' 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 86249 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86249 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.947 killing process with pid 86249 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86249' 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 86249 00:15:34.947 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 86249 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:35.207 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:35.467 ************************************ 00:15:35.467 END TEST nvmf_wait_for_buf 00:15:35.467 ************************************ 00:15:35.467 00:15:35.467 real 0m3.251s 00:15:35.467 user 0m2.563s 00:15:35.467 sys 0m0.780s 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.467 ************************************ 00:15:35.467 START TEST nvmf_fuzz 00:15:35.467 ************************************ 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:35.467 * Looking for test storage... 
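For reference, the wait_for_buf run that just finished reduces to a short RPC sequence against a target started with --wait-for-rpc: shrink the iobuf small pool to 154 buffers before framework init, build a malloc-backed TCP subsystem on 10.0.0.3:4420, push 128 KiB random reads through it with spdk_nvme_perf, then read iobuf_get_stats and require a non-zero small_pool.retry count (4750 in this run), proving the transport really had to wait for buffers. A condensed replay is sketched below; rpc_cmd is the harness wrapper around scripts/rpc.py, $perf is the spdk_nvme_perf path set in the trace, and the real script's error handling is omitted.

# Condensed replay of the wait_for_buf sequence traced above.
rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny pool
rpc_cmd framework_start_init

rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Generate enough 128 KiB reads that the 154-buffer small pool must be retried.
"$perf" -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

# The test only passes if the transport actually had to retry buffer allocation.
retry_count=$(rpc_cmd iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && exit 1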
00:15:35.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:35.467 11:04:40 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.727 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:35.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.728 --rc genhtml_branch_coverage=1 00:15:35.728 --rc genhtml_function_coverage=1 00:15:35.728 --rc genhtml_legend=1 00:15:35.728 --rc geninfo_all_blocks=1 00:15:35.728 --rc geninfo_unexecuted_blocks=1 00:15:35.728 00:15:35.728 ' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:35.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.728 --rc genhtml_branch_coverage=1 00:15:35.728 --rc genhtml_function_coverage=1 00:15:35.728 --rc genhtml_legend=1 00:15:35.728 --rc geninfo_all_blocks=1 00:15:35.728 --rc geninfo_unexecuted_blocks=1 00:15:35.728 00:15:35.728 ' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:35.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.728 --rc genhtml_branch_coverage=1 00:15:35.728 --rc genhtml_function_coverage=1 00:15:35.728 --rc genhtml_legend=1 00:15:35.728 --rc geninfo_all_blocks=1 00:15:35.728 --rc geninfo_unexecuted_blocks=1 00:15:35.728 00:15:35.728 ' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:35.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.728 --rc genhtml_branch_coverage=1 00:15:35.728 --rc genhtml_function_coverage=1 00:15:35.728 --rc genhtml_legend=1 00:15:35.728 --rc geninfo_all_blocks=1 00:15:35.728 --rc geninfo_unexecuted_blocks=1 00:15:35.728 00:15:35.728 ' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
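The nvmftestfini path that closed out the previous test (and will run again at the end of this one) is worth a condensed sketch: unload the nvme kernel modules, kill the target pid, strip only the SPDK-tagged firewall rules, and delete the veth/bridge pieces. Names are taken from the trace; killprocess in the harness also validates the process name before killing, and the final namespace removal happens inside remove_spdk_ns, whose body is xtrace-suppressed above, so the last line below is an assumption.

# Condensed sketch of the nvmftestfini teardown traced at the end of wait_for_buf.
sync
modprobe -v -r nvme-tcp          # the trace removes nvme-fabrics the same way
kill "$nvmfpid"                  # target pid captured at startup (86249 in this run)

# Drop only the firewall rules the test added, by filtering on the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear the virtual topology back down.
ip link set nvmf_init_br nomaster
ip link set nvmf_tgt_br nomaster
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk   # assumed: performed inside remove_spdk_ns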
00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:35.728 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
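As in the previous test, build_nvmf_app_args assembles the target command line incrementally: the base flags -i $NVMF_APP_SHM_ID -e 0xFFFF are appended first, and once the namespace exists the whole array is prefixed with ip netns exec nvmf_tgt_ns_spdk so that nvmfappstart launches the target inside the test namespace. A minimal sketch of that array-building pattern follows; the initial array contents and the backgrounding done by nvmfappstart are inferred from the launch line in the trace rather than copied from the (xtrace-hidden) helper bodies.

# Minimal sketch of the traced command-line assembly (variable names follow the trace).
NVMF_APP_SHM_ID=0
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed base contents
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# After nvmf_veth_init, the launch command becomes "ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF".
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

# nvmfappstart then runs it in the background and waits for the RPC socket to appear.
"${NVMF_APP[@]}" --wait-for-rpc &
nvmfpid=$!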
00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:35.728 Cannot find device "nvmf_init_br" 00:15:35.728 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:35.728 11:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:35.729 Cannot find device "nvmf_init_br2" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:35.729 Cannot find device "nvmf_tgt_br" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.729 Cannot find device "nvmf_tgt_br2" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:35.729 Cannot find device "nvmf_init_br" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:35.729 Cannot find device "nvmf_init_br2" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:35.729 Cannot find device "nvmf_tgt_br" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:35.729 Cannot find device "nvmf_tgt_br2" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:35.729 Cannot find device "nvmf_br" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:35.729 Cannot find device "nvmf_init_if" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:35.729 Cannot find device "nvmf_init_if2" 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:35.729 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:35.988 11:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:35.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:35.988 00:15:35.988 --- 10.0.0.3 ping statistics --- 00:15:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.988 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:35.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:35.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:35.988 00:15:35.988 --- 10.0.0.4 ping statistics --- 00:15:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.988 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:35.988 00:15:35.988 --- 10.0.0.1 ping statistics --- 00:15:35.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.988 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:35.988 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:36.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:15:36.247 00:15:36.247 --- 10.0.0.2 ping statistics --- 00:15:36.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.247 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86500 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86500 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # '[' -z 86500 ']' 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
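Note: the nvmf/common.sh setup traced above builds a small virtual topology so the initiator (host side) and the SPDK target (inside the nvmf_tgt_ns_spdk namespace) can reach each other over TCP. A condensed sketch of that setup, assuming root and iproute2; interface names and addresses are taken from the trace, and the second interface pair (nvmf_init_if2/nvmf_tgt_if2) follows the same pattern and is omitted for brevity:

    # create the target namespace and one veth pair per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace

    # address the endpoints (10.0.0.1 = initiator, 10.0.0.3 = target)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring everything up and bridge the two "br" ends together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP traffic to port 4420 and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3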
00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.247 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 Malloc0 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.506 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:36.507 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.507 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.507 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.507 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:36.507 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:36.766 Shutting down the fuzz application 00:15:36.766 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:37.025 Shutting down the fuzz application 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.025 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:37.025 rmmod nvme_tcp 00:15:37.025 rmmod nvme_fabrics 00:15:37.025 rmmod nvme_keyring 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 86500 ']' 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 86500 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' -z 86500 ']' 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # kill -0 86500 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # uname 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86500 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.284 killing process with pid 86500 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86500' 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@971 -- # kill 86500 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@976 -- # wait 86500 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:37.284 11:04:42 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:37.284 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:37.543 00:15:37.543 real 0m2.127s 00:15:37.543 user 0m1.722s 00:15:37.543 sys 0m0.673s 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:37.543 ************************************ 00:15:37.543 END TEST nvmf_fuzz 00:15:37.543 ************************************ 00:15:37.543 11:04:42 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:37.543 11:04:43 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:37.543 11:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:37.543 11:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:37.543 11:04:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.543 ************************************ 00:15:37.543 START TEST nvmf_multiconnection 00:15:37.543 ************************************ 00:15:37.543 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:37.803 * Looking for test storage... 00:15:37.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.803 --rc genhtml_branch_coverage=1 00:15:37.803 --rc genhtml_function_coverage=1 00:15:37.803 --rc genhtml_legend=1 00:15:37.803 --rc geninfo_all_blocks=1 00:15:37.803 --rc geninfo_unexecuted_blocks=1 00:15:37.803 00:15:37.803 ' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.803 --rc genhtml_branch_coverage=1 00:15:37.803 --rc genhtml_function_coverage=1 00:15:37.803 --rc genhtml_legend=1 00:15:37.803 --rc geninfo_all_blocks=1 00:15:37.803 --rc geninfo_unexecuted_blocks=1 00:15:37.803 00:15:37.803 ' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.803 --rc genhtml_branch_coverage=1 00:15:37.803 --rc genhtml_function_coverage=1 00:15:37.803 --rc genhtml_legend=1 00:15:37.803 --rc geninfo_all_blocks=1 00:15:37.803 --rc geninfo_unexecuted_blocks=1 00:15:37.803 00:15:37.803 ' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:37.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.803 --rc genhtml_branch_coverage=1 00:15:37.803 --rc genhtml_function_coverage=1 00:15:37.803 --rc genhtml_legend=1 00:15:37.803 --rc geninfo_all_blocks=1 00:15:37.803 --rc geninfo_unexecuted_blocks=1 00:15:37.803 00:15:37.803 ' 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.803 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.804 
11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.804 11:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:37.804 Cannot find device "nvmf_init_br" 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:37.804 Cannot find device "nvmf_init_br2" 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:37.804 Cannot find device "nvmf_tgt_br" 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:37.804 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.063 Cannot find device "nvmf_tgt_br2" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:38.063 Cannot find device "nvmf_init_br" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:38.063 Cannot find device "nvmf_init_br2" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:38.063 Cannot find device "nvmf_tgt_br" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:38.063 Cannot find device "nvmf_tgt_br2" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:38.063 Cannot find device "nvmf_br" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:38.063 Cannot find device "nvmf_init_if" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:38.063 Cannot find device "nvmf_init_if2" 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:38.063 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:38.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:38.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:38.323 00:15:38.323 --- 10.0.0.3 ping statistics --- 00:15:38.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.323 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:38.323 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:38.323 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:15:38.323 00:15:38.323 --- 10.0.0.4 ping statistics --- 00:15:38.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.323 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:38.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:38.323 00:15:38.323 --- 10.0.0.1 ping statistics --- 00:15:38.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.323 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:38.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:38.323 00:15:38.323 --- 10.0.0.2 ping statistics --- 00:15:38.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.323 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=86737 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 86737 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # '[' -z 86737 ']' 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.323 11:04:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.323 [2024-10-29 11:04:43.757357] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:15:38.323 [2024-10-29 11:04:43.757471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.582 [2024-10-29 11:04:43.913177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.582 [2024-10-29 11:04:43.941194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.582 [2024-10-29 11:04:43.941252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.582 [2024-10-29 11:04:43.941265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.582 [2024-10-29 11:04:43.941275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.582 [2024-10-29 11:04:43.941284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.582 [2024-10-29 11:04:43.942266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.582 [2024-10-29 11:04:43.942562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.582 [2024-10-29 11:04:43.943006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.582 [2024-10-29 11:04:43.943047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.582 [2024-10-29 11:04:43.980652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@866 -- # return 0 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.582 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.582 [2024-10-29 11:04:44.074511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:38.863 11:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 Malloc1 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 [2024-10-29 11:04:44.133264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 Malloc2 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.863 Malloc3 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.863 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 Malloc4 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 Malloc5 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:38.864 
11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 Malloc6 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.864 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 Malloc7 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 Malloc8 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 
11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:39.130 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 Malloc9 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 Malloc10 00:15:39.131 11:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 Malloc11 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:39.131 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:39.390 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:39.390 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:39.390 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:39.390 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:39.390 11:04:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK1 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:41.289 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:15:41.548 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:15:41.548 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:41.548 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.548 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:41.548 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:43.451 11:04:48 
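The trace above is the target-side setup loop of multiconnection.sh: for each of the eleven subsystems it creates a 64 MB malloc bdev with 512-byte blocks, creates a subsystem with serial number SPDKn that allows any host, attaches the bdev as a namespace, and adds an NVMe/TCP listener on 10.0.0.3:4420. A minimal standalone sketch of the same sequence is shown below; it assumes a running nvmf_tgt whose TCP transport was already created earlier in the run, and it calls the stock rpc.py client directly (the client path is an assumption, in the test these calls go through the rpc_cmd helper instead):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the SPDK RPC client
  for i in $(seq 1 11); do
    # 64 MB malloc bdev with a 512-byte block size, named Malloc<i>
    "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"
    # one subsystem per bdev; -a allows any host NQN, -s sets the serial to SPDK<i>
    "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # attach the bdev as a namespace of that subsystem
    "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # expose the subsystem to NVMe/TCP hosts on 10.0.0.3:4420
    "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
  done

The RPC names and arguments are taken verbatim from the trace; only the loop scaffolding and the rpc.py invocation are reconstructed.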
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK2 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:43.451 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:15:43.709 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:15:43.709 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:43.709 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.709 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:43.709 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK3 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:45.610 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:15:45.869 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:15:45.869 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:45.869 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.869 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n 
'' ]] 00:15:45.869 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK4 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:47.772 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:15:48.030 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:15:48.030 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:48.030 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.030 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:48.030 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:49.932 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:49.932 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK5 00:15:49.932 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:49.932 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:49.933 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.933 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:49.933 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:49.933 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:15:50.192 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:50.192 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:50.192 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:50.192 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:50.192 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK6 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:52.095 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:15:52.354 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:52.354 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:52.354 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.354 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:52.354 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK7 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:54.258 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:15:54.521 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:15:54.521 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1200 -- # local i=0 00:15:54.521 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.521 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:54.521 11:04:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK8 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:56.425 11:05:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:15:56.684 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:15:56.684 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:56.684 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.684 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:56.684 11:05:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK9 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.589 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:15:58.848 11:05:04 
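On the host side, the loop above issues one nvme connect per subsystem and then polls lsblk until a block device with the expected serial (SPDK1, SPDK2, ...) shows up, which is what the waitforserial helper does with its 2-second sleep and 15-try cap. A rough equivalent is sketched below, reusing the host NQN and host ID printed in this run; the variable names and loop structure are illustrative rather than the test's own code:

  HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
  for i in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
    # poll until lsblk lists a namespace whose serial matches SPDK<i>
    tries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
      tries=$((tries + 1))
      [ "$tries" -gt 15 ] && break   # give up after roughly 30 seconds
      sleep 2
    done
  done

Once a device is visible for every serial, all eleven kernel nvme-tcp block devices (/dev/nvme0n1 through /dev/nvme10n1) are available for the fio passes that follow.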
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:58.849 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:15:58.849 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.849 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:58.849 11:05:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:16:00.752 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:00.752 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:00.752 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK10 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # local i=0 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:01.011 11:05:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # sleep 2 00:16:02.916 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:02.916 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # grep -c SPDK11 00:16:02.916 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:03.175 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:03.175 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.175 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # return 0 00:16:03.175 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:03.175 [global] 00:16:03.175 thread=1 00:16:03.175 invalidate=1 00:16:03.175 rw=read 00:16:03.175 time_based=1 
00:16:03.175 runtime=10 00:16:03.175 ioengine=libaio 00:16:03.175 direct=1 00:16:03.175 bs=262144 00:16:03.175 iodepth=64 00:16:03.175 norandommap=1 00:16:03.175 numjobs=1 00:16:03.175 00:16:03.175 [job0] 00:16:03.175 filename=/dev/nvme0n1 00:16:03.175 [job1] 00:16:03.175 filename=/dev/nvme10n1 00:16:03.175 [job2] 00:16:03.175 filename=/dev/nvme1n1 00:16:03.175 [job3] 00:16:03.175 filename=/dev/nvme2n1 00:16:03.175 [job4] 00:16:03.175 filename=/dev/nvme3n1 00:16:03.175 [job5] 00:16:03.175 filename=/dev/nvme4n1 00:16:03.175 [job6] 00:16:03.175 filename=/dev/nvme5n1 00:16:03.175 [job7] 00:16:03.175 filename=/dev/nvme6n1 00:16:03.175 [job8] 00:16:03.175 filename=/dev/nvme7n1 00:16:03.175 [job9] 00:16:03.175 filename=/dev/nvme8n1 00:16:03.175 [job10] 00:16:03.175 filename=/dev/nvme9n1 00:16:03.175 Could not set queue depth (nvme0n1) 00:16:03.175 Could not set queue depth (nvme10n1) 00:16:03.175 Could not set queue depth (nvme1n1) 00:16:03.175 Could not set queue depth (nvme2n1) 00:16:03.175 Could not set queue depth (nvme3n1) 00:16:03.175 Could not set queue depth (nvme4n1) 00:16:03.175 Could not set queue depth (nvme5n1) 00:16:03.175 Could not set queue depth (nvme6n1) 00:16:03.175 Could not set queue depth (nvme7n1) 00:16:03.175 Could not set queue depth (nvme8n1) 00:16:03.175 Could not set queue depth (nvme9n1) 00:16:03.434 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.434 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.434 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.434 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.434 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.434 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.434 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.435 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.435 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.435 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.435 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:03.435 fio-3.35 00:16:03.435 Starting 11 threads 00:16:15.657 00:16:15.657 job0: (groupid=0, jobs=1): err= 0: pid=87192: Tue Oct 29 11:05:19 2024 00:16:15.657 read: IOPS=437, BW=109MiB/s (115MB/s)(1110MiB/10146msec) 00:16:15.657 slat (usec): min=19, max=410651, avg=2142.56, stdev=11875.53 00:16:15.657 clat (usec): min=1169, max=1058.3k, avg=144017.85, stdev=163646.40 00:16:15.657 lat (usec): min=1241, max=1058.4k, avg=146160.41, stdev=166090.39 00:16:15.657 clat percentiles (usec): 00:16:15.657 | 1.00th=[ 1975], 5.00th=[ 10159], 10.00th=[ 41157], 00:16:15.657 | 20.00th=[ 70779], 30.00th=[ 72877], 40.00th=[ 74974], 00:16:15.657 | 50.00th=[ 76022], 60.00th=[ 79168], 70.00th=[ 83362], 00:16:15.657 | 80.00th=[ 252707], 90.00th=[ 346031], 95.00th=[ 557843], 00:16:15.657 | 99.00th=[ 759170], 99.50th=[ 868221], 99.90th=[ 876610], 00:16:15.657 | 99.95th=[ 
876610], 99.99th=[1061159] 00:16:15.657 bw ( KiB/s): min=10752, max=221764, per=18.78%, avg=111952.20, stdev=88290.37, samples=20 00:16:15.657 iops : min= 42, max= 866, avg=437.30, stdev=344.87, samples=20 00:16:15.657 lat (msec) : 2=1.04%, 4=3.06%, 10=0.43%, 20=0.97%, 50=6.35% 00:16:15.657 lat (msec) : 100=65.03%, 250=2.77%, 500=14.71%, 750=4.62%, 1000=0.99% 00:16:15.657 lat (msec) : 2000=0.02% 00:16:15.657 cpu : usr=0.35%, sys=1.92%, ctx=1396, majf=0, minf=4097 00:16:15.657 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:15.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.657 issued rwts: total=4438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.657 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.657 job1: (groupid=0, jobs=1): err= 0: pid=87193: Tue Oct 29 11:05:19 2024 00:16:15.657 read: IOPS=142, BW=35.6MiB/s (37.3MB/s)(362MiB/10174msec) 00:16:15.657 slat (usec): min=19, max=137727, avg=6912.27, stdev=18840.46 00:16:15.657 clat (msec): min=61, max=665, avg=441.72, stdev=100.23 00:16:15.657 lat (msec): min=61, max=700, avg=448.63, stdev=101.48 00:16:15.657 clat percentiles (msec): 00:16:15.657 | 1.00th=[ 77], 5.00th=[ 228], 10.00th=[ 300], 20.00th=[ 388], 00:16:15.657 | 30.00th=[ 418], 40.00th=[ 435], 50.00th=[ 451], 60.00th=[ 472], 00:16:15.657 | 70.00th=[ 493], 80.00th=[ 518], 90.00th=[ 558], 95.00th=[ 567], 00:16:15.657 | 99.00th=[ 634], 99.50th=[ 642], 99.90th=[ 651], 99.95th=[ 667], 00:16:15.657 | 99.99th=[ 667] 00:16:15.657 bw ( KiB/s): min=27648, max=50075, per=5.94%, avg=35444.45, stdev=5132.76, samples=20 00:16:15.657 iops : min= 108, max= 195, avg=138.30, stdev=20.04, samples=20 00:16:15.657 lat (msec) : 100=1.10%, 250=5.04%, 500=67.29%, 750=26.57% 00:16:15.658 cpu : usr=0.05%, sys=0.70%, ctx=296, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.7% 00:16:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job2: (groupid=0, jobs=1): err= 0: pid=87194: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=126, BW=31.5MiB/s (33.1MB/s)(321MiB/10176msec) 00:16:15.658 slat (usec): min=13, max=327279, avg=7214.74, stdev=21001.30 00:16:15.658 clat (msec): min=13, max=861, avg=499.39, stdev=117.43 00:16:15.658 lat (msec): min=13, max=864, avg=506.61, stdev=117.98 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 52], 5.00th=[ 372], 10.00th=[ 405], 20.00th=[ 430], 00:16:15.658 | 30.00th=[ 447], 40.00th=[ 464], 50.00th=[ 485], 60.00th=[ 518], 00:16:15.658 | 70.00th=[ 542], 80.00th=[ 567], 90.00th=[ 651], 95.00th=[ 735], 00:16:15.658 | 99.00th=[ 818], 99.50th=[ 835], 99.90th=[ 860], 99.95th=[ 860], 00:16:15.658 | 99.99th=[ 860] 00:16:15.658 bw ( KiB/s): min=11264, max=43008, per=5.25%, avg=31273.75, stdev=7827.59, samples=20 00:16:15.658 iops : min= 44, max= 168, avg=122.00, stdev=30.56, samples=20 00:16:15.658 lat (msec) : 20=0.16%, 50=0.55%, 100=0.93%, 250=0.93%, 500=53.35% 00:16:15.658 lat (msec) : 750=39.88%, 1000=4.21% 00:16:15.658 cpu : usr=0.03%, sys=0.76%, ctx=267, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:16:15.658 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job3: (groupid=0, jobs=1): err= 0: pid=87195: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=122, BW=30.7MiB/s (32.2MB/s)(313MiB/10178msec) 00:16:15.658 slat (usec): min=16, max=457028, avg=8030.43, stdev=25107.34 00:16:15.658 clat (msec): min=123, max=878, avg=512.06, stdev=145.42 00:16:15.658 lat (msec): min=123, max=934, avg=520.09, stdev=146.30 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 148], 5.00th=[ 253], 10.00th=[ 388], 20.00th=[ 422], 00:16:15.658 | 30.00th=[ 443], 40.00th=[ 464], 50.00th=[ 493], 60.00th=[ 518], 00:16:15.658 | 70.00th=[ 558], 80.00th=[ 609], 90.00th=[ 709], 95.00th=[ 810], 00:16:15.658 | 99.00th=[ 877], 99.50th=[ 877], 99.90th=[ 877], 99.95th=[ 877], 00:16:15.658 | 99.99th=[ 877] 00:16:15.658 bw ( KiB/s): min=17408, max=38989, per=5.10%, avg=30405.25, stdev=6396.96, samples=20 00:16:15.658 iops : min= 68, max= 152, avg=118.60, stdev=24.98, samples=20 00:16:15.658 lat (msec) : 250=4.88%, 500=48.12%, 750=37.65%, 1000=9.35% 00:16:15.658 cpu : usr=0.06%, sys=0.62%, ctx=223, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=95.0% 00:16:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job4: (groupid=0, jobs=1): err= 0: pid=87196: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=136, BW=34.0MiB/s (35.7MB/s)(346MiB/10172msec) 00:16:15.658 slat (usec): min=20, max=217913, avg=7216.89, stdev=22277.86 00:16:15.658 clat (msec): min=33, max=760, avg=462.18, stdev=113.42 00:16:15.658 lat (msec): min=33, max=760, avg=469.39, stdev=114.18 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 37], 5.00th=[ 296], 10.00th=[ 338], 20.00th=[ 384], 00:16:15.658 | 30.00th=[ 414], 40.00th=[ 435], 50.00th=[ 451], 60.00th=[ 481], 00:16:15.658 | 70.00th=[ 523], 80.00th=[ 558], 90.00th=[ 609], 95.00th=[ 642], 00:16:15.658 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 743], 99.95th=[ 760], 00:16:15.658 | 99.99th=[ 760] 00:16:15.658 bw ( KiB/s): min=18395, max=45056, per=5.68%, avg=33858.45, stdev=6712.54, samples=20 00:16:15.658 iops : min= 71, max= 176, avg=132.10, stdev=26.30, samples=20 00:16:15.658 lat (msec) : 50=1.66%, 250=1.08%, 500=62.89%, 750=34.30%, 1000=0.07% 00:16:15.658 cpu : usr=0.08%, sys=0.65%, ctx=239, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:16:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job5: (groupid=0, jobs=1): err= 0: pid=87197: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=138, BW=34.7MiB/s (36.3MB/s)(353MiB/10176msec) 00:16:15.658 slat (usec): min=20, max=182153, avg=7093.56, stdev=19614.29 00:16:15.658 clat (msec): min=19, max=736, avg=453.88, stdev=121.69 00:16:15.658 lat (msec): 
min=20, max=736, avg=460.98, stdev=122.78 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 93], 5.00th=[ 199], 10.00th=[ 317], 20.00th=[ 397], 00:16:15.658 | 30.00th=[ 418], 40.00th=[ 435], 50.00th=[ 451], 60.00th=[ 472], 00:16:15.658 | 70.00th=[ 502], 80.00th=[ 550], 90.00th=[ 609], 95.00th=[ 651], 00:16:15.658 | 99.00th=[ 709], 99.50th=[ 709], 99.90th=[ 735], 99.95th=[ 735], 00:16:15.658 | 99.99th=[ 735] 00:16:15.658 bw ( KiB/s): min=17408, max=64000, per=5.79%, avg=34498.30, stdev=9441.35, samples=20 00:16:15.658 iops : min= 68, max= 250, avg=134.60, stdev=36.93, samples=20 00:16:15.658 lat (msec) : 20=0.07%, 100=1.49%, 250=4.82%, 500=63.08%, 750=30.55% 00:16:15.658 cpu : usr=0.08%, sys=0.68%, ctx=262, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job6: (groupid=0, jobs=1): err= 0: pid=87198: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=125, BW=31.3MiB/s (32.8MB/s)(318MiB/10174msec) 00:16:15.658 slat (usec): min=20, max=387393, avg=7861.68, stdev=22780.40 00:16:15.658 clat (msec): min=79, max=952, avg=503.05, stdev=128.53 00:16:15.658 lat (msec): min=80, max=952, avg=510.91, stdev=129.86 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 88], 5.00th=[ 292], 10.00th=[ 384], 20.00th=[ 422], 00:16:15.658 | 30.00th=[ 439], 40.00th=[ 456], 50.00th=[ 477], 60.00th=[ 518], 00:16:15.658 | 70.00th=[ 575], 80.00th=[ 609], 90.00th=[ 659], 95.00th=[ 735], 00:16:15.658 | 99.00th=[ 827], 99.50th=[ 860], 99.90th=[ 877], 99.95th=[ 953], 00:16:15.658 | 99.99th=[ 953] 00:16:15.658 bw ( KiB/s): min=17955, max=40960, per=5.19%, avg=30943.20, stdev=7058.70, samples=20 00:16:15.658 iops : min= 70, max= 160, avg=120.75, stdev=27.61, samples=20 00:16:15.658 lat (msec) : 100=1.34%, 250=2.04%, 500=53.10%, 750=39.20%, 1000=4.32% 00:16:15.658 cpu : usr=0.11%, sys=0.57%, ctx=238, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:16:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job7: (groupid=0, jobs=1): err= 0: pid=87199: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=122, BW=30.5MiB/s (32.0MB/s)(311MiB/10171msec) 00:16:15.658 slat (usec): min=20, max=279429, avg=8066.65, stdev=23395.17 00:16:15.658 clat (msec): min=145, max=929, avg=515.25, stdev=128.46 00:16:15.658 lat (msec): min=196, max=929, avg=523.32, stdev=129.77 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 275], 5.00th=[ 359], 10.00th=[ 388], 20.00th=[ 422], 00:16:15.658 | 30.00th=[ 435], 40.00th=[ 451], 50.00th=[ 477], 60.00th=[ 514], 00:16:15.658 | 70.00th=[ 575], 80.00th=[ 609], 90.00th=[ 709], 95.00th=[ 768], 00:16:15.658 | 99.00th=[ 894], 99.50th=[ 894], 99.90th=[ 927], 99.95th=[ 927], 00:16:15.658 | 99.99th=[ 927] 00:16:15.658 bw ( KiB/s): min=12800, max=41984, per=5.06%, avg=30156.80, stdev=8370.11, samples=20 00:16:15.658 iops : min= 50, max= 164, avg=117.80, stdev=32.70, samples=20 00:16:15.658 
lat (msec) : 250=0.89%, 500=55.07%, 750=38.41%, 1000=5.64% 00:16:15.658 cpu : usr=0.06%, sys=0.58%, ctx=226, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:16:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.658 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.658 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.658 job8: (groupid=0, jobs=1): err= 0: pid=87200: Tue Oct 29 11:05:19 2024 00:16:15.658 read: IOPS=337, BW=84.3MiB/s (88.4MB/s)(858MiB/10177msec) 00:16:15.658 slat (usec): min=11, max=510235, avg=2918.65, stdev=14787.37 00:16:15.658 clat (msec): min=13, max=736, avg=186.61, stdev=216.17 00:16:15.658 lat (msec): min=13, max=801, avg=189.53, stdev=219.26 00:16:15.658 clat percentiles (msec): 00:16:15.658 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 36], 00:16:15.658 | 30.00th=[ 37], 40.00th=[ 42], 50.00th=[ 46], 60.00th=[ 49], 00:16:15.658 | 70.00th=[ 384], 80.00th=[ 443], 90.00th=[ 523], 95.00th=[ 592], 00:16:15.658 | 99.00th=[ 693], 99.50th=[ 735], 99.90th=[ 735], 99.95th=[ 735], 00:16:15.658 | 99.99th=[ 735] 00:16:15.658 bw ( KiB/s): min= 2560, max=399872, per=14.47%, avg=86262.05, stdev=132353.64, samples=20 00:16:15.658 iops : min= 10, max= 1562, avg=336.80, stdev=517.08, samples=20 00:16:15.658 lat (msec) : 20=0.35%, 50=61.84%, 100=4.84%, 250=0.44%, 500=20.19% 00:16:15.658 lat (msec) : 750=12.35% 00:16:15.658 cpu : usr=0.11%, sys=1.14%, ctx=417, majf=0, minf=4097 00:16:15.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:15.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.659 issued rwts: total=3433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.659 job9: (groupid=0, jobs=1): err= 0: pid=87201: Tue Oct 29 11:05:19 2024 00:16:15.659 read: IOPS=323, BW=80.8MiB/s (84.7MB/s)(820MiB/10147msec) 00:16:15.659 slat (usec): min=20, max=92857, avg=3046.12, stdev=7430.05 00:16:15.659 clat (msec): min=12, max=410, avg=194.81, stdev=57.70 00:16:15.659 lat (msec): min=13, max=410, avg=197.85, stdev=58.54 00:16:15.659 clat percentiles (msec): 00:16:15.659 | 1.00th=[ 117], 5.00th=[ 140], 10.00th=[ 146], 20.00th=[ 155], 00:16:15.659 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:16:15.659 | 70.00th=[ 194], 80.00th=[ 259], 90.00th=[ 296], 95.00th=[ 309], 00:16:15.659 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 388], 99.95th=[ 388], 00:16:15.659 | 99.99th=[ 409] 00:16:15.659 bw ( KiB/s): min=51712, max=114176, per=13.81%, avg=82338.50, stdev=20393.08, samples=20 00:16:15.659 iops : min= 202, max= 446, avg=321.60, stdev=79.65, samples=20 00:16:15.659 lat (msec) : 20=0.06%, 50=0.88%, 250=78.10%, 500=20.95% 00:16:15.659 cpu : usr=0.28%, sys=1.44%, ctx=681, majf=0, minf=4097 00:16:15.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:15.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.659 issued rwts: total=3279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.659 job10: (groupid=0, jobs=1): err= 0: pid=87202: 
Tue Oct 29 11:05:19 2024 00:16:15.659 read: IOPS=321, BW=80.3MiB/s (84.2MB/s)(815MiB/10144msec) 00:16:15.659 slat (usec): min=20, max=66096, avg=3065.62, stdev=7461.60 00:16:15.659 clat (msec): min=18, max=454, avg=195.86, stdev=62.49 00:16:15.659 lat (msec): min=22, max=454, avg=198.93, stdev=63.30 00:16:15.659 clat percentiles (msec): 00:16:15.659 | 1.00th=[ 86], 5.00th=[ 136], 10.00th=[ 144], 20.00th=[ 155], 00:16:15.659 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:16:15.659 | 70.00th=[ 197], 80.00th=[ 245], 90.00th=[ 313], 95.00th=[ 330], 00:16:15.659 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 456], 99.95th=[ 456], 00:16:15.659 | 99.99th=[ 456] 00:16:15.659 bw ( KiB/s): min=48640, max=103936, per=13.72%, avg=81817.35, stdev=20491.42, samples=20 00:16:15.659 iops : min= 190, max= 406, avg=319.55, stdev=80.04, samples=20 00:16:15.659 lat (msec) : 20=0.03%, 50=0.09%, 100=1.20%, 250=80.06%, 500=18.62% 00:16:15.659 cpu : usr=0.23%, sys=1.39%, ctx=662, majf=0, minf=4097 00:16:15.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:15.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:15.659 issued rwts: total=3260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.659 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:15.659 00:16:15.659 Run status group 0 (all jobs): 00:16:15.659 READ: bw=582MiB/s (611MB/s), 30.5MiB/s-109MiB/s (32.0MB/s-115MB/s), io=5926MiB (6214MB), run=10144-10178msec 00:16:15.659 00:16:15.659 Disk stats (read/write): 00:16:15.659 nvme0n1: ios=8748/0, merge=0/0, ticks=1222551/0, in_queue=1222551, util=97.76% 00:16:15.659 nvme10n1: ios=2771/0, merge=0/0, ticks=1218646/0, in_queue=1218646, util=97.91% 00:16:15.659 nvme1n1: ios=2454/0, merge=0/0, ticks=1225139/0, in_queue=1225139, util=98.17% 00:16:15.659 nvme2n1: ios=2375/0, merge=0/0, ticks=1220107/0, in_queue=1220107, util=98.25% 00:16:15.659 nvme3n1: ios=2655/0, merge=0/0, ticks=1221402/0, in_queue=1221402, util=98.30% 00:16:15.659 nvme4n1: ios=2694/0, merge=0/0, ticks=1220978/0, in_queue=1220978, util=98.52% 00:16:15.659 nvme5n1: ios=2422/0, merge=0/0, ticks=1217021/0, in_queue=1217021, util=98.62% 00:16:15.659 nvme6n1: ios=2356/0, merge=0/0, ticks=1217998/0, in_queue=1217998, util=98.64% 00:16:15.659 nvme7n1: ios=6738/0, merge=0/0, ticks=1220826/0, in_queue=1220826, util=99.06% 00:16:15.659 nvme8n1: ios=6431/0, merge=0/0, ticks=1226663/0, in_queue=1226663, util=99.05% 00:16:15.659 nvme9n1: ios=6393/0, merge=0/0, ticks=1224446/0, in_queue=1224446, util=99.12% 00:16:15.659 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:15.659 [global] 00:16:15.659 thread=1 00:16:15.659 invalidate=1 00:16:15.659 rw=randwrite 00:16:15.659 time_based=1 00:16:15.659 runtime=10 00:16:15.659 ioengine=libaio 00:16:15.659 direct=1 00:16:15.659 bs=262144 00:16:15.659 iodepth=64 00:16:15.659 norandommap=1 00:16:15.659 numjobs=1 00:16:15.659 00:16:15.659 [job0] 00:16:15.659 filename=/dev/nvme0n1 00:16:15.659 [job1] 00:16:15.659 filename=/dev/nvme10n1 00:16:15.659 [job2] 00:16:15.659 filename=/dev/nvme1n1 00:16:15.659 [job3] 00:16:15.659 filename=/dev/nvme2n1 00:16:15.659 [job4] 00:16:15.659 filename=/dev/nvme3n1 00:16:15.659 [job5] 00:16:15.659 filename=/dev/nvme4n1 00:16:15.659 [job6] 00:16:15.659 filename=/dev/nvme5n1 
00:16:15.659 [job7] 00:16:15.659 filename=/dev/nvme6n1 00:16:15.659 [job8] 00:16:15.659 filename=/dev/nvme7n1 00:16:15.659 [job9] 00:16:15.659 filename=/dev/nvme8n1 00:16:15.659 [job10] 00:16:15.659 filename=/dev/nvme9n1 00:16:15.659 Could not set queue depth (nvme0n1) 00:16:15.659 Could not set queue depth (nvme10n1) 00:16:15.659 Could not set queue depth (nvme1n1) 00:16:15.659 Could not set queue depth (nvme2n1) 00:16:15.659 Could not set queue depth (nvme3n1) 00:16:15.659 Could not set queue depth (nvme4n1) 00:16:15.659 Could not set queue depth (nvme5n1) 00:16:15.659 Could not set queue depth (nvme6n1) 00:16:15.659 Could not set queue depth (nvme7n1) 00:16:15.659 Could not set queue depth (nvme8n1) 00:16:15.659 Could not set queue depth (nvme9n1) 00:16:15.659 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:15.659 fio-3.35 00:16:15.659 Starting 11 threads 00:16:25.645 00:16:25.645 job0: (groupid=0, jobs=1): err= 0: pid=87398: Tue Oct 29 11:05:30 2024 00:16:25.645 write: IOPS=142, BW=35.6MiB/s (37.3MB/s)(366MiB/10289msec); 0 zone resets 00:16:25.645 slat (usec): min=19, max=73457, avg=6830.81, stdev=12426.72 00:16:25.645 clat (msec): min=26, max=731, avg=442.41, stdev=65.24 00:16:25.645 lat (msec): min=26, max=731, avg=449.24, stdev=65.28 00:16:25.645 clat percentiles (msec): 00:16:25.645 | 1.00th=[ 114], 5.00th=[ 359], 10.00th=[ 414], 20.00th=[ 430], 00:16:25.645 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 451], 60.00th=[ 460], 00:16:25.645 | 70.00th=[ 460], 80.00th=[ 464], 90.00th=[ 477], 95.00th=[ 493], 00:16:25.645 | 99.00th=[ 617], 99.50th=[ 676], 99.90th=[ 735], 99.95th=[ 735], 00:16:25.645 | 99.99th=[ 735] 00:16:25.645 bw ( KiB/s): min=32768, max=39424, per=4.81%, avg=35854.90, stdev=1617.20, samples=20 00:16:25.645 iops : min= 128, max= 154, avg=139.95, stdev= 6.35, samples=20 00:16:25.645 lat (msec) : 50=0.34%, 100=0.55%, 250=1.91%, 500=94.27%, 750=2.94% 00:16:25.645 cpu : usr=0.26%, sys=0.45%, ctx=693, majf=0, minf=1 00:16:25.645 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:16:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.645 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:16:25.645 issued rwts: total=0,1465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.645 job1: (groupid=0, jobs=1): err= 0: pid=87399: Tue Oct 29 11:05:30 2024 00:16:25.645 write: IOPS=428, BW=107MiB/s (112MB/s)(1082MiB/10103msec); 0 zone resets 00:16:25.645 slat (usec): min=15, max=93020, avg=2304.72, stdev=4368.10 00:16:25.645 clat (msec): min=95, max=250, avg=147.05, stdev= 9.56 00:16:25.645 lat (msec): min=95, max=250, avg=149.35, stdev= 8.89 00:16:25.645 clat percentiles (msec): 00:16:25.645 | 1.00th=[ 130], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 142], 00:16:25.645 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 148], 00:16:25.645 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 157], 00:16:25.645 | 99.00th=[ 176], 99.50th=[ 205], 99.90th=[ 247], 99.95th=[ 247], 00:16:25.645 | 99.99th=[ 251] 00:16:25.645 bw ( KiB/s): min=92160, max=112640, per=14.64%, avg=109162.30, stdev=4627.52, samples=20 00:16:25.645 iops : min= 360, max= 440, avg=426.40, stdev=18.08, samples=20 00:16:25.645 lat (msec) : 100=0.09%, 250=99.88%, 500=0.02% 00:16:25.645 cpu : usr=0.67%, sys=1.41%, ctx=1206, majf=0, minf=1 00:16:25.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:16:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.645 issued rwts: total=0,4328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.645 job2: (groupid=0, jobs=1): err= 0: pid=87410: Tue Oct 29 11:05:30 2024 00:16:25.645 write: IOPS=142, BW=35.6MiB/s (37.3MB/s)(366MiB/10284msec); 0 zone resets 00:16:25.645 slat (usec): min=20, max=99146, avg=6826.92, stdev=12440.28 00:16:25.645 clat (msec): min=38, max=737, avg=442.45, stdev=65.60 00:16:25.645 lat (msec): min=38, max=737, avg=449.27, stdev=65.64 00:16:25.645 clat percentiles (msec): 00:16:25.645 | 1.00th=[ 107], 5.00th=[ 368], 10.00th=[ 418], 20.00th=[ 430], 00:16:25.645 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 451], 60.00th=[ 456], 00:16:25.645 | 70.00th=[ 460], 80.00th=[ 468], 90.00th=[ 477], 95.00th=[ 493], 00:16:25.645 | 99.00th=[ 625], 99.50th=[ 676], 99.90th=[ 735], 99.95th=[ 735], 00:16:25.645 | 99.99th=[ 735] 00:16:25.645 bw ( KiB/s): min=32768, max=38912, per=4.81%, avg=35865.60, stdev=1394.52, samples=20 00:16:25.645 iops : min= 128, max= 152, avg=140.10, stdev= 5.45, samples=20 00:16:25.645 lat (msec) : 50=0.27%, 100=0.55%, 250=1.91%, 500=92.49%, 750=4.78% 00:16:25.645 cpu : usr=0.31%, sys=0.39%, ctx=700, majf=0, minf=1 00:16:25.645 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:16:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.645 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.645 issued rwts: total=0,1464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.645 job3: (groupid=0, jobs=1): err= 0: pid=87412: Tue Oct 29 11:05:30 2024 00:16:25.645 write: IOPS=559, BW=140MiB/s (147MB/s)(1412MiB/10095msec); 0 zone resets 00:16:25.645 slat (usec): min=13, max=9771, avg=1765.32, stdev=3020.12 00:16:25.645 clat (msec): min=12, max=205, avg=112.58, stdev= 8.96 00:16:25.645 lat (msec): min=12, max=205, avg=114.35, stdev= 8.57 00:16:25.645 clat percentiles (msec): 00:16:25.645 | 1.00th=[ 97], 
5.00th=[ 106], 10.00th=[ 107], 20.00th=[ 109], 00:16:25.645 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 113], 60.00th=[ 115], 00:16:25.645 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 118], 95.00th=[ 118], 00:16:25.645 | 99.00th=[ 122], 99.50th=[ 155], 99.90th=[ 199], 99.95th=[ 199], 00:16:25.645 | 99.99th=[ 205] 00:16:25.645 bw ( KiB/s): min=137728, max=147456, per=19.18%, avg=142976.00, stdev=2389.25, samples=20 00:16:25.645 iops : min= 538, max= 576, avg=558.50, stdev= 9.33, samples=20 00:16:25.645 lat (msec) : 20=0.14%, 50=0.28%, 100=0.71%, 250=98.87% 00:16:25.645 cpu : usr=1.00%, sys=1.57%, ctx=2683, majf=0, minf=1 00:16:25.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:25.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.645 issued rwts: total=0,5648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.645 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.645 job4: (groupid=0, jobs=1): err= 0: pid=87413: Tue Oct 29 11:05:30 2024 00:16:25.645 write: IOPS=136, BW=34.1MiB/s (35.8MB/s)(351MiB/10284msec); 0 zone resets 00:16:25.645 slat (usec): min=16, max=269276, avg=7124.27, stdev=15032.57 00:16:25.645 clat (msec): min=169, max=770, avg=461.38, stdev=74.84 00:16:25.645 lat (msec): min=169, max=770, avg=468.51, stdev=74.82 00:16:25.645 clat percentiles (msec): 00:16:25.645 | 1.00th=[ 239], 5.00th=[ 397], 10.00th=[ 418], 20.00th=[ 430], 00:16:25.645 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 447], 60.00th=[ 456], 00:16:25.646 | 70.00th=[ 460], 80.00th=[ 472], 90.00th=[ 527], 95.00th=[ 676], 00:16:25.646 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 768], 99.95th=[ 768], 00:16:25.646 | 99.99th=[ 768] 00:16:25.646 bw ( KiB/s): min=20480, max=36864, per=4.61%, avg=34329.60, stdev=4094.23, samples=20 00:16:25.646 iops : min= 80, max= 144, avg=134.10, stdev=15.99, samples=20 00:16:25.646 lat (msec) : 250=1.14%, 500=83.12%, 750=15.53%, 1000=0.21% 00:16:25.646 cpu : usr=0.29%, sys=0.39%, ctx=851, majf=0, minf=1 00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,1404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 job5: (groupid=0, jobs=1): err= 0: pid=87414: Tue Oct 29 11:05:30 2024 00:16:25.646 write: IOPS=140, BW=35.2MiB/s (36.9MB/s)(362MiB/10292msec); 0 zone resets 00:16:25.646 slat (usec): min=17, max=296786, avg=6915.93, stdev=14247.42 00:16:25.646 clat (msec): min=267, max=730, avg=447.72, stdev=41.57 00:16:25.646 lat (msec): min=297, max=730, avg=454.64, stdev=40.07 00:16:25.646 clat percentiles (msec): 00:16:25.646 | 1.00th=[ 326], 5.00th=[ 409], 10.00th=[ 418], 20.00th=[ 430], 00:16:25.646 | 30.00th=[ 439], 40.00th=[ 443], 50.00th=[ 447], 60.00th=[ 451], 00:16:25.646 | 70.00th=[ 456], 80.00th=[ 460], 90.00th=[ 468], 95.00th=[ 481], 00:16:25.646 | 99.00th=[ 642], 99.50th=[ 676], 99.90th=[ 735], 99.95th=[ 735], 00:16:25.646 | 99.99th=[ 735] 00:16:25.646 bw ( KiB/s): min=18432, max=38912, per=4.75%, avg=35426.90, stdev=4208.22, samples=20 00:16:25.646 iops : min= 72, max= 152, avg=138.35, stdev=16.45, samples=20 00:16:25.646 lat (msec) : 500=96.06%, 750=3.94% 00:16:25.646 cpu : usr=0.34%, sys=0.37%, ctx=1663, majf=0, minf=1 
00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,1448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 job6: (groupid=0, jobs=1): err= 0: pid=87415: Tue Oct 29 11:05:30 2024 00:16:25.646 write: IOPS=137, BW=34.4MiB/s (36.1MB/s)(354MiB/10281msec); 0 zone resets 00:16:25.646 slat (usec): min=17, max=106339, avg=7057.81, stdev=13175.12 00:16:25.646 clat (msec): min=84, max=744, avg=457.35, stdev=64.03 00:16:25.646 lat (msec): min=84, max=744, avg=464.40, stdev=63.93 00:16:25.646 clat percentiles (msec): 00:16:25.646 | 1.00th=[ 148], 5.00th=[ 368], 10.00th=[ 422], 20.00th=[ 439], 00:16:25.646 | 30.00th=[ 451], 40.00th=[ 456], 50.00th=[ 464], 60.00th=[ 468], 00:16:25.646 | 70.00th=[ 477], 80.00th=[ 485], 90.00th=[ 506], 95.00th=[ 527], 00:16:25.646 | 99.00th=[ 625], 99.50th=[ 693], 99.90th=[ 743], 99.95th=[ 743], 00:16:25.646 | 99.99th=[ 743] 00:16:25.646 bw ( KiB/s): min=30720, max=36864, per=4.65%, avg=34633.10, stdev=1956.04, samples=20 00:16:25.646 iops : min= 120, max= 144, avg=135.25, stdev= 7.60, samples=20 00:16:25.646 lat (msec) : 100=0.28%, 250=1.98%, 500=86.58%, 750=11.16% 00:16:25.646 cpu : usr=0.25%, sys=0.42%, ctx=692, majf=0, minf=1 00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,1416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 job7: (groupid=0, jobs=1): err= 0: pid=87417: Tue Oct 29 11:05:30 2024 00:16:25.646 write: IOPS=559, BW=140MiB/s (147MB/s)(1411MiB/10094msec); 0 zone resets 00:16:25.646 slat (usec): min=18, max=10688, avg=1765.53, stdev=2998.37 00:16:25.646 clat (msec): min=13, max=207, avg=112.61, stdev= 9.09 00:16:25.646 lat (msec): min=13, max=207, avg=114.37, stdev= 8.72 00:16:25.646 clat percentiles (msec): 00:16:25.646 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 107], 20.00th=[ 109], 00:16:25.646 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 113], 60.00th=[ 115], 00:16:25.646 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 118], 95.00th=[ 118], 00:16:25.646 | 99.00th=[ 122], 99.50th=[ 159], 99.90th=[ 201], 99.95th=[ 201], 00:16:25.646 | 99.99th=[ 209] 00:16:25.646 bw ( KiB/s): min=137728, max=145408, per=19.17%, avg=142884.65, stdev=2362.50, samples=20 00:16:25.646 iops : min= 538, max= 568, avg=558.10, stdev= 9.19, samples=20 00:16:25.646 lat (msec) : 20=0.14%, 50=0.28%, 100=0.67%, 250=98.90% 00:16:25.646 cpu : usr=0.93%, sys=1.76%, ctx=11317, majf=0, minf=1 00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,5645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 job8: (groupid=0, jobs=1): err= 0: pid=87423: Tue Oct 29 11:05:30 2024 00:16:25.646 write: IOPS=136, BW=34.2MiB/s (35.9MB/s)(352MiB/10291msec); 0 zone resets 00:16:25.646 slat (usec): min=20, 
max=241078, avg=7110.36, stdev=15130.76 00:16:25.646 clat (msec): min=242, max=732, avg=460.41, stdev=68.09 00:16:25.646 lat (msec): min=242, max=733, avg=467.52, stdev=67.76 00:16:25.646 clat percentiles (msec): 00:16:25.646 | 1.00th=[ 300], 5.00th=[ 405], 10.00th=[ 414], 20.00th=[ 430], 00:16:25.646 | 30.00th=[ 439], 40.00th=[ 443], 50.00th=[ 447], 60.00th=[ 456], 00:16:25.646 | 70.00th=[ 460], 80.00th=[ 472], 90.00th=[ 527], 95.00th=[ 659], 00:16:25.646 | 99.00th=[ 709], 99.50th=[ 709], 99.90th=[ 735], 99.95th=[ 735], 00:16:25.646 | 99.99th=[ 735] 00:16:25.646 bw ( KiB/s): min=20480, max=38912, per=4.61%, avg=34402.90, stdev=4486.63, samples=20 00:16:25.646 iops : min= 80, max= 152, avg=134.35, stdev=17.52, samples=20 00:16:25.646 lat (msec) : 250=0.28%, 500=87.93%, 750=11.79% 00:16:25.646 cpu : usr=0.24%, sys=0.47%, ctx=1947, majf=0, minf=1 00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,1408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 job9: (groupid=0, jobs=1): err= 0: pid=87424: Tue Oct 29 11:05:30 2024 00:16:25.646 write: IOPS=139, BW=34.9MiB/s (36.6MB/s)(360MiB/10310msec); 0 zone resets 00:16:25.646 slat (usec): min=22, max=228295, avg=6958.48, stdev=13879.94 00:16:25.646 clat (msec): min=8, max=757, avg=451.00, stdev=86.65 00:16:25.646 lat (msec): min=8, max=757, avg=457.95, stdev=87.22 00:16:25.646 clat percentiles (msec): 00:16:25.646 | 1.00th=[ 81], 5.00th=[ 351], 10.00th=[ 401], 20.00th=[ 426], 00:16:25.646 | 30.00th=[ 439], 40.00th=[ 447], 50.00th=[ 447], 60.00th=[ 456], 00:16:25.646 | 70.00th=[ 460], 80.00th=[ 481], 90.00th=[ 523], 95.00th=[ 625], 00:16:25.646 | 99.00th=[ 684], 99.50th=[ 709], 99.90th=[ 760], 99.95th=[ 760], 00:16:25.646 | 99.99th=[ 760] 00:16:25.646 bw ( KiB/s): min=20480, max=43008, per=4.73%, avg=35256.35, stdev=4384.03, samples=20 00:16:25.646 iops : min= 80, max= 168, avg=137.50, stdev=17.10, samples=20 00:16:25.646 lat (msec) : 10=0.28%, 50=0.28%, 100=0.56%, 250=1.94%, 500=86.04% 00:16:25.646 lat (msec) : 750=10.69%, 1000=0.21% 00:16:25.646 cpu : usr=0.32%, sys=0.38%, ctx=1125, majf=0, minf=1 00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,1440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 job10: (groupid=0, jobs=1): err= 0: pid=87425: Tue Oct 29 11:05:30 2024 00:16:25.646 write: IOPS=430, BW=108MiB/s (113MB/s)(1089MiB/10107msec); 0 zone resets 00:16:25.646 slat (usec): min=19, max=77147, avg=2289.89, stdev=4194.71 00:16:25.646 clat (msec): min=11, max=252, avg=146.16, stdev=14.65 00:16:25.646 lat (msec): min=11, max=252, avg=148.45, stdev=14.34 00:16:25.646 clat percentiles (msec): 00:16:25.646 | 1.00th=[ 71], 5.00th=[ 136], 10.00th=[ 138], 20.00th=[ 142], 00:16:25.646 | 30.00th=[ 144], 40.00th=[ 146], 50.00th=[ 148], 60.00th=[ 148], 00:16:25.646 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 157], 00:16:25.646 | 99.00th=[ 176], 99.50th=[ 207], 99.90th=[ 245], 99.95th=[ 245], 00:16:25.646 | 99.99th=[ 253] 
00:16:25.646 bw ( KiB/s): min=102195, max=112640, per=14.74%, avg=109879.90, stdev=2774.95, samples=20 00:16:25.646 iops : min= 399, max= 440, avg=429.20, stdev=10.88, samples=20 00:16:25.646 lat (msec) : 20=0.18%, 50=0.46%, 100=0.73%, 250=98.58%, 500=0.05% 00:16:25.646 cpu : usr=0.81%, sys=1.31%, ctx=1109, majf=0, minf=1 00:16:25.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:25.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:25.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:25.646 issued rwts: total=0,4356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:25.646 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:25.646 00:16:25.646 Run status group 0 (all jobs): 00:16:25.646 WRITE: bw=728MiB/s (763MB/s), 34.1MiB/s-140MiB/s (35.8MB/s-147MB/s), io=7506MiB (7870MB), run=10094-10310msec 00:16:25.646 00:16:25.646 Disk stats (read/write): 00:16:25.646 nvme0n1: ios=49/2906, merge=0/0, ticks=46/1237100, in_queue=1237146, util=97.89% 00:16:25.646 nvme10n1: ios=49/8516, merge=0/0, ticks=47/1212049, in_queue=1212096, util=97.83% 00:16:25.646 nvme1n1: ios=46/2906, merge=0/0, ticks=46/1236475, in_queue=1236521, util=98.10% 00:16:25.646 nvme2n1: ios=29/11160, merge=0/0, ticks=35/1214942, in_queue=1214977, util=98.06% 00:16:25.646 nvme3n1: ios=26/2784, merge=0/0, ticks=38/1236164, in_queue=1236202, util=98.16% 00:16:25.646 nvme4n1: ios=0/2868, merge=0/0, ticks=0/1237511, in_queue=1237511, util=98.22% 00:16:25.646 nvme5n1: ios=0/2811, merge=0/0, ticks=0/1236162, in_queue=1236162, util=98.40% 00:16:25.646 nvme6n1: ios=0/11160, merge=0/0, ticks=0/1214758, in_queue=1214758, util=98.41% 00:16:25.646 nvme7n1: ios=0/2789, merge=0/0, ticks=0/1236710, in_queue=1236710, util=98.70% 00:16:25.646 nvme8n1: ios=0/2849, merge=0/0, ticks=0/1236757, in_queue=1236757, util=99.00% 00:16:25.646 nvme9n1: ios=0/8576, merge=0/0, ticks=0/1212037, in_queue=1212037, util=98.87% 00:16:25.646 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:25.646 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:25.646 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.646 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK1 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK1 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK2 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK2 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK3 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK3 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK4 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK4 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK5 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK5 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK6 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK6 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK7 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK7 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:25.647 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK8 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK8 00:16:25.647 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.647 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.647 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:25.647 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.647 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.647 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:25.648 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK9 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK9 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.648 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:25.908 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK10 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK10 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:25.908 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1221 -- # local i=0 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1222 -- # grep -q -w SPDK11 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # grep -q -w SPDK11 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1233 -- # return 0 00:16:25.908 11:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.908 rmmod nvme_tcp 00:16:25.908 rmmod nvme_fabrics 00:16:25.908 rmmod nvme_keyring 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 86737 ']' 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 86737 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' -z 86737 ']' 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # kill -0 86737 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # uname 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86737 00:16:25.908 killing process with pid 86737 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86737' 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@971 -- # kill 86737 00:16:25.908 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@976 -- # wait 86737 00:16:26.167 
11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.168 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.168 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.168 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:26.168 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:16:26.168 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.168 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:26.427 00:16:26.427 real 0m48.877s 00:16:26.427 user 2m47.270s 00:16:26.427 sys 0m25.686s 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:26.427 ************************************ 00:16:26.427 END TEST nvmf_multiconnection 00:16:26.427 ************************************ 00:16:26.427 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.685 11:05:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:26.685 11:05:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:26.685 11:05:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:26.685 11:05:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.685 ************************************ 00:16:26.685 START TEST nvmf_initiator_timeout 00:16:26.685 ************************************ 00:16:26.685 11:05:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:26.685 * Looking for test storage... 00:16:26.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.685 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:26.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.685 --rc genhtml_branch_coverage=1 00:16:26.685 --rc genhtml_function_coverage=1 00:16:26.685 --rc genhtml_legend=1 00:16:26.685 --rc geninfo_all_blocks=1 00:16:26.685 --rc geninfo_unexecuted_blocks=1 00:16:26.685 00:16:26.685 ' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.686 --rc genhtml_branch_coverage=1 00:16:26.686 --rc genhtml_function_coverage=1 00:16:26.686 --rc genhtml_legend=1 00:16:26.686 --rc geninfo_all_blocks=1 00:16:26.686 --rc geninfo_unexecuted_blocks=1 00:16:26.686 00:16:26.686 ' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.686 --rc genhtml_branch_coverage=1 00:16:26.686 --rc genhtml_function_coverage=1 00:16:26.686 --rc genhtml_legend=1 00:16:26.686 --rc geninfo_all_blocks=1 00:16:26.686 --rc geninfo_unexecuted_blocks=1 00:16:26.686 00:16:26.686 ' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:26.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.686 --rc genhtml_branch_coverage=1 00:16:26.686 --rc genhtml_function_coverage=1 00:16:26.686 --rc genhtml_legend=1 00:16:26.686 --rc geninfo_all_blocks=1 00:16:26.686 --rc geninfo_unexecuted_blocks=1 00:16:26.686 00:16:26.686 ' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.686 11:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.686 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.686 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:26.944 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:26.945 Cannot find device "nvmf_init_br" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:26.945 Cannot find device "nvmf_init_br2" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:26.945 Cannot find device "nvmf_tgt_br" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.945 Cannot find device "nvmf_tgt_br2" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:26.945 Cannot find device "nvmf_init_br" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:26.945 Cannot find device "nvmf_init_br2" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:26.945 Cannot find device "nvmf_tgt_br" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:26.945 Cannot find device "nvmf_tgt_br2" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:26.945 11:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:26.945 Cannot find device "nvmf_br" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:26.945 Cannot find device "nvmf_init_if" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:26.945 Cannot find device "nvmf_init_if2" 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.945 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:26.945 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:27.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:27.204 00:16:27.204 --- 10.0.0.3 ping statistics --- 00:16:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.204 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:27.204 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:27.204 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:16:27.204 00:16:27.204 --- 10.0.0.4 ping statistics --- 00:16:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.204 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:27.204 00:16:27.204 --- 10.0.0.1 ping statistics --- 00:16:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.204 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:27.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:16:27.204 00:16:27.204 --- 10.0.0.2 ping statistics --- 00:16:27.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.204 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=87857 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 87857 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # '[' -z 87857 ']' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.204 11:05:32 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:27.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:27.204 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.204 [2024-10-29 11:05:32.653994] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:16:27.204 [2024-10-29 11:05:32.654276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.463 [2024-10-29 11:05:32.804696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.463 [2024-10-29 11:05:32.824794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.463 [2024-10-29 11:05:32.825092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.463 [2024-10-29 11:05:32.825251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.463 [2024-10-29 11:05:32.825303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.463 [2024-10-29 11:05:32.825410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.463 [2024-10-29 11:05:32.826152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.463 [2024-10-29 11:05:32.826279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.463 [2024-10-29 11:05:32.826903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.463 [2024-10-29 11:05:32.826967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.463 [2024-10-29 11:05:32.857048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@866 -- # return 0 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.463 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.723 Malloc0 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.723 Delay0 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.723 11:05:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.723 [2024-10-29 11:05:32.993030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:27.723 11:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:27.723 [2024-10-29 11:05:33.021191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # local i=0 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:16:27.723 11:05:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # sleep 2 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # return 0 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87915 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:30.260 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:30.260 [global] 00:16:30.260 thread=1 00:16:30.260 invalidate=1 00:16:30.260 rw=write 00:16:30.260 time_based=1 00:16:30.260 runtime=60 00:16:30.260 ioengine=libaio 00:16:30.260 direct=1 00:16:30.260 bs=4096 00:16:30.260 iodepth=1 00:16:30.260 norandommap=0 00:16:30.260 numjobs=1 00:16:30.260 00:16:30.260 verify_dump=1 00:16:30.260 verify_backlog=512 00:16:30.260 verify_state_save=0 00:16:30.260 do_verify=1 00:16:30.260 verify=crc32c-intel 00:16:30.260 [job0] 00:16:30.260 filename=/dev/nvme0n1 00:16:30.260 Could not set queue depth (nvme0n1) 00:16:30.260 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:30.260 fio-3.35 00:16:30.260 Starting 1 thread 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:32.792 true 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:32.792 true 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:32.792 true 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:32.792 true 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.792 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 true 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 true 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 true 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:36.080 true 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:36.080 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87915 00:17:32.329 00:17:32.329 job0: (groupid=0, jobs=1): err= 0: pid=87936: Tue Oct 29 11:06:35 2024 00:17:32.329 read: IOPS=827, BW=3311KiB/s (3390kB/s)(194MiB/60000msec) 00:17:32.329 slat (usec): min=9, max=12708, avg=13.08, stdev=73.43 00:17:32.329 clat (usec): min=152, max=40620k, avg=1017.02, stdev=182270.61 00:17:32.329 lat (usec): min=163, max=40620k, avg=1030.10, stdev=182270.64 00:17:32.329 clat percentiles (usec): 00:17:32.329 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:17:32.329 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:17:32.329 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 241], 00:17:32.329 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 486], 99.95th=[ 586], 00:17:32.329 | 99.99th=[ 914] 00:17:32.329 write: IOPS=828, BW=3314KiB/s (3394kB/s)(194MiB/60000msec); 0 zone resets 00:17:32.329 slat (usec): min=12, max=550, avg=19.46, stdev= 6.91 00:17:32.329 clat (usec): min=117, max=1647, avg=155.53, stdev=27.30 00:17:32.329 lat (usec): min=132, max=1673, avg=174.99, stdev=28.84 00:17:32.329 clat percentiles (usec): 00:17:32.329 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:17:32.329 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:17:32.329 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 00:17:32.329 | 99.00th=[ 215], 
99.50th=[ 239], 99.90th=[ 408], 99.95th=[ 562], 00:17:32.329 | 99.99th=[ 865] 00:17:32.329 bw ( KiB/s): min= 2032, max=12288, per=100.00%, avg=9977.44, stdev=2001.07, samples=39 00:17:32.329 iops : min= 508, max= 3072, avg=2494.36, stdev=500.27, samples=39 00:17:32.329 lat (usec) : 250=98.36%, 500=1.56%, 750=0.06%, 1000=0.02% 00:17:32.329 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:17:32.329 cpu : usr=0.60%, sys=2.03%, ctx=99386, majf=0, minf=5 00:17:32.329 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:32.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:32.329 issued rwts: total=49664,49714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:32.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:32.329 00:17:32.329 Run status group 0 (all jobs): 00:17:32.329 READ: bw=3311KiB/s (3390kB/s), 3311KiB/s-3311KiB/s (3390kB/s-3390kB/s), io=194MiB (203MB), run=60000-60000msec 00:17:32.329 WRITE: bw=3314KiB/s (3394kB/s), 3314KiB/s-3314KiB/s (3394kB/s-3394kB/s), io=194MiB (204MB), run=60000-60000msec 00:17:32.329 00:17:32.329 Disk stats (read/write): 00:17:32.329 nvme0n1: ios=49574/49664, merge=0/0, ticks=10483/8455, in_queue=18938, util=99.54% 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:32.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1221 -- # local i=0 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:32.329 nvmf hotplug test: fio successful as expected 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1233 -- # return 0 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:32.329 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.329 rmmod nvme_tcp 00:17:32.329 rmmod nvme_fabrics 00:17:32.329 rmmod nvme_keyring 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 87857 ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 87857 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' -z 87857 ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # kill -0 87857 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # uname 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87857 00:17:32.329 killing process with pid 87857 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87857' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # kill 87857 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@976 -- # wait 87857 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:17:32.329 11:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.329 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.330 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:32.330 ************************************ 00:17:32.330 END TEST nvmf_initiator_timeout 00:17:32.330 ************************************ 00:17:32.330 00:17:32.330 real 1m4.072s 00:17:32.330 user 3m49.400s 00:17:32.330 sys 0m22.449s 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:17:32.330 ************************************ 00:17:32.330 END TEST nvmf_target_extra 00:17:32.330 ************************************ 00:17:32.330 00:17:32.330 real 6m49.932s 00:17:32.330 user 17m3.029s 00:17:32.330 sys 1m54.413s 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:32.330 11:06:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.330 11:06:36 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:32.330 11:06:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:32.330 11:06:36 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:32.330 11:06:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:32.330 ************************************ 00:17:32.330 START TEST nvmf_host 00:17:32.330 ************************************ 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:32.330 * Looking for test storage... 00:17:32.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.330 --rc genhtml_branch_coverage=1 00:17:32.330 --rc genhtml_function_coverage=1 00:17:32.330 --rc genhtml_legend=1 00:17:32.330 --rc geninfo_all_blocks=1 00:17:32.330 --rc geninfo_unexecuted_blocks=1 00:17:32.330 00:17:32.330 ' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.330 --rc genhtml_branch_coverage=1 00:17:32.330 --rc genhtml_function_coverage=1 00:17:32.330 --rc genhtml_legend=1 00:17:32.330 --rc geninfo_all_blocks=1 00:17:32.330 --rc geninfo_unexecuted_blocks=1 00:17:32.330 00:17:32.330 ' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.330 --rc genhtml_branch_coverage=1 00:17:32.330 --rc genhtml_function_coverage=1 00:17:32.330 --rc genhtml_legend=1 00:17:32.330 --rc geninfo_all_blocks=1 00:17:32.330 --rc geninfo_unexecuted_blocks=1 00:17:32.330 00:17:32.330 ' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.330 --rc genhtml_branch_coverage=1 00:17:32.330 --rc genhtml_function_coverage=1 00:17:32.330 --rc genhtml_legend=1 00:17:32.330 --rc geninfo_all_blocks=1 00:17:32.330 --rc geninfo_unexecuted_blocks=1 00:17:32.330 00:17:32.330 ' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.330 11:06:36 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.330 11:06:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.331 ************************************ 00:17:32.331 START TEST nvmf_identify 00:17:32.331 ************************************ 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:32.331 * Looking for test storage... 
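The xtrace noise around the LCOV settings above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2, so the extra --rc lcov_*_coverage flags get exported; the same gate runs again below when the nested nvmf_identify test re-sources the file. A compressed, standalone sketch of that comparison (the real helper also sanitizes each component through its decimal() wrapper):

  # split versions on '.', '-' or ':' and compare them component by component
  lt() {                                   # succeeds when $1 < $2
      local -a ver1 ver2; local v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                             # versions equal, so not strictly less
  }
  lt 1.15 2 && echo 'old lcov: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'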
00:17:32.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.331 --rc genhtml_branch_coverage=1 00:17:32.331 --rc genhtml_function_coverage=1 00:17:32.331 --rc genhtml_legend=1 00:17:32.331 --rc geninfo_all_blocks=1 00:17:32.331 --rc geninfo_unexecuted_blocks=1 00:17:32.331 00:17:32.331 ' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.331 --rc genhtml_branch_coverage=1 00:17:32.331 --rc genhtml_function_coverage=1 00:17:32.331 --rc genhtml_legend=1 00:17:32.331 --rc geninfo_all_blocks=1 00:17:32.331 --rc geninfo_unexecuted_blocks=1 00:17:32.331 00:17:32.331 ' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.331 --rc genhtml_branch_coverage=1 00:17:32.331 --rc genhtml_function_coverage=1 00:17:32.331 --rc genhtml_legend=1 00:17:32.331 --rc geninfo_all_blocks=1 00:17:32.331 --rc geninfo_unexecuted_blocks=1 00:17:32.331 00:17:32.331 ' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:32.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.331 --rc genhtml_branch_coverage=1 00:17:32.331 --rc genhtml_function_coverage=1 00:17:32.331 --rc genhtml_legend=1 00:17:32.331 --rc geninfo_all_blocks=1 00:17:32.331 --rc geninfo_unexecuted_blocks=1 00:17:32.331 00:17:32.331 ' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.331 
11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.331 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.332 11:06:36 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:32.332 Cannot find device "nvmf_init_br" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:32.332 Cannot find device "nvmf_init_br2" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:32.332 Cannot find device "nvmf_tgt_br" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:32.332 Cannot find device "nvmf_tgt_br2" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:32.332 Cannot find device "nvmf_init_br" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:32.332 Cannot find device "nvmf_init_br2" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:32.332 Cannot find device "nvmf_tgt_br" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:32.332 Cannot find device "nvmf_tgt_br2" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:32.332 Cannot find device "nvmf_br" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:32.332 Cannot find device "nvmf_init_if" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:32.332 Cannot find device "nvmf_init_if2" 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:32.332 
11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:32.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:32.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:32.332 00:17:32.332 --- 10.0.0.3 ping statistics --- 00:17:32.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.332 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:32.332 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:32.332 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:32.332 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:17:32.332 00:17:32.332 --- 10.0.0.4 ping statistics --- 00:17:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.333 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:32.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:32.333 00:17:32.333 --- 10.0.0.1 ping statistics --- 00:17:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.333 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:32.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:17:32.333 00:17:32.333 --- 10.0.0.2 ping statistics --- 00:17:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.333 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88860 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88860 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 88860 ']' 00:17:32.333 
11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:32.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:32.333 11:06:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 [2024-10-29 11:06:36.971099] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:17:32.333 [2024-10-29 11:06:36.971198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.333 [2024-10-29 11:06:37.125504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.333 [2024-10-29 11:06:37.150655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.333 [2024-10-29 11:06:37.150957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.333 [2024-10-29 11:06:37.151123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.333 [2024-10-29 11:06:37.151274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.333 [2024-10-29 11:06:37.151315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
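The runs of "Cannot find device ..." and "Cannot open network namespace ..." earlier in this block are the teardown half of nvmf_veth_init failing harmlessly on a clean machine; the setup half then builds the namespace, veth pairs, bridge and firewall rules that the pings to 10.0.0.1 through 10.0.0.4 verify. Reduced to one of its two legs, the topology is roughly:

  # one initiator/target leg of the topology assembled above (names taken from the trace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end is pushed into the test namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # the bridge ties the root-namespace peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # root namespace -> target namespace reachability

The nvmf_tgt process itself is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until /var/tmp/spdk.sock accepts RPC connections, which is what the startup notices above are reporting.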
00:17:32.333 [2024-10-29 11:06:37.152401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.333 [2024-10-29 11:06:37.152474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.333 [2024-10-29 11:06:37.152543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:32.333 [2024-10-29 11:06:37.152545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.333 [2024-10-29 11:06:37.187091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 [2024-10-29 11:06:37.243037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 Malloc0 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 [2024-10-29 11:06:37.339253] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.333 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.333 [ 00:17:32.333 { 00:17:32.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:32.333 "subtype": "Discovery", 00:17:32.333 "listen_addresses": [ 00:17:32.333 { 00:17:32.333 "trtype": "TCP", 00:17:32.333 "adrfam": "IPv4", 00:17:32.333 "traddr": "10.0.0.3", 00:17:32.333 "trsvcid": "4420" 00:17:32.333 } 00:17:32.333 ], 00:17:32.333 "allow_any_host": true, 00:17:32.333 "hosts": [] 00:17:32.333 }, 00:17:32.333 { 00:17:32.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.333 "subtype": "NVMe", 00:17:32.333 "listen_addresses": [ 00:17:32.333 { 00:17:32.333 "trtype": "TCP", 00:17:32.333 "adrfam": "IPv4", 00:17:32.333 "traddr": "10.0.0.3", 00:17:32.333 "trsvcid": "4420" 00:17:32.333 } 00:17:32.333 ], 00:17:32.333 "allow_any_host": true, 00:17:32.333 "hosts": [], 00:17:32.333 "serial_number": "SPDK00000000000001", 00:17:32.333 "model_number": "SPDK bdev Controller", 00:17:32.333 "max_namespaces": 32, 00:17:32.333 "min_cntlid": 1, 00:17:32.333 "max_cntlid": 65519, 00:17:32.333 "namespaces": [ 00:17:32.333 { 00:17:32.333 "nsid": 1, 00:17:32.334 "bdev_name": "Malloc0", 00:17:32.334 "name": "Malloc0", 00:17:32.334 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:32.334 "eui64": "ABCDEF0123456789", 00:17:32.334 "uuid": "7f561351-2e29-4ff4-9723-a287aa08c611" 00:17:32.334 } 00:17:32.334 ] 00:17:32.334 } 00:17:32.334 ] 00:17:32.334 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.334 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:32.334 [2024-10-29 11:06:37.390805] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
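Before spdk_nvme_identify starts above, the target has been configured entirely through rpc_cmd, the harness wrapper around SPDK's JSON-RPC client. The same configuration can be replayed by hand against the default RPC socket (/var/tmp/spdk.sock); assuming the repo layout used in this run, the direct equivalents of the traced calls are:

  # direct scripts/rpc.py equivalents of the rpc_cmd calls traced above
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_get_subsystems          # prints the JSON dump shown above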
00:17:32.334 [2024-10-29 11:06:37.390871] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88888 ] 00:17:32.334 [2024-10-29 11:06:37.553189] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:32.334 [2024-10-29 11:06:37.553273] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:32.334 [2024-10-29 11:06:37.553281] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:32.334 [2024-10-29 11:06:37.553293] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:32.334 [2024-10-29 11:06:37.553304] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:32.334 [2024-10-29 11:06:37.557739] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:32.334 [2024-10-29 11:06:37.557828] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x151eb00 0 00:17:32.334 [2024-10-29 11:06:37.565453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:32.334 [2024-10-29 11:06:37.565478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:32.334 [2024-10-29 11:06:37.565500] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:32.334 [2024-10-29 11:06:37.565504] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:32.334 [2024-10-29 11:06:37.565535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.565548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.565553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.565567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:32.334 [2024-10-29 11:06:37.565597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.573429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.573451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.573473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573478] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.573494] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:32.334 [2024-10-29 11:06:37.573502] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:32.334 [2024-10-29 11:06:37.573508] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:32.334 [2024-10-29 11:06:37.573526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
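The -L all flag on spdk_nvme_identify is what turns on the per-module DEBUG output that follows: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN written to 1, a poll for CSTS.RDY, then IDENTIFY, AER configuration and the keep-alive timer. A kernel NVMe/TCP host performs the same admin-queue exchange; a rough nvme-cli equivalent against the data subsystem configured above would be (the device node on the second line is illustrative):

  # hypothetical nvme-cli session covering the same bring-up steps the trace walks through
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 \
       --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6
  nvme id-ctrl /dev/nvme0           # IDENTIFY controller, as issued on cid 0 in the trace
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1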
00:17:32.334 [2024-10-29 11:06:37.573535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.573545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.334 [2024-10-29 11:06:37.573572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.573634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.573641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.573644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.573670] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:32.334 [2024-10-29 11:06:37.573694] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:32.334 [2024-10-29 11:06:37.573703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.573720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.334 [2024-10-29 11:06:37.573740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.573785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.573792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.573796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.573807] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:32.334 [2024-10-29 11:06:37.573816] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:32.334 [2024-10-29 11:06:37.573824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.573840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.334 [2024-10-29 11:06:37.573870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.573912] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.573919] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.573923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.573933] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:32.334 [2024-10-29 11:06:37.573949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.573959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.573967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.334 [2024-10-29 11:06:37.573987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.574031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.574038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.574042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.574052] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:32.334 [2024-10-29 11:06:37.574057] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:32.334 [2024-10-29 11:06:37.574066] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:32.334 [2024-10-29 11:06:37.574172] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:32.334 [2024-10-29 11:06:37.574187] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:32.334 [2024-10-29 11:06:37.574198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.574215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.334 [2024-10-29 11:06:37.574236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.574284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.574291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.574295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:32.334 [2024-10-29 11:06:37.574300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.574305] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:32.334 [2024-10-29 11:06:37.574316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.334 [2024-10-29 11:06:37.574333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.334 [2024-10-29 11:06:37.574351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.334 [2024-10-29 11:06:37.574413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.334 [2024-10-29 11:06:37.574422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.334 [2024-10-29 11:06:37.574426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.334 [2024-10-29 11:06:37.574436] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:32.334 [2024-10-29 11:06:37.574442] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:32.334 [2024-10-29 11:06:37.574451] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:32.334 [2024-10-29 11:06:37.574467] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:32.334 [2024-10-29 11:06:37.574478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.334 [2024-10-29 11:06:37.574482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.335 [2024-10-29 11:06:37.574514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.335 [2024-10-29 11:06:37.574603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.335 [2024-10-29 11:06:37.574611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.335 [2024-10-29 11:06:37.574615] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574619] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x151eb00): datao=0, datal=4096, cccid=0 00:17:32.335 [2024-10-29 11:06:37.574625] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1564fc0) on tqpair(0x151eb00): expected_datao=0, payload_size=4096 00:17:32.335 [2024-10-29 11:06:37.574630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574639] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574645] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.335 [2024-10-29 11:06:37.574661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.335 [2024-10-29 11:06:37.574665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.335 [2024-10-29 11:06:37.574680] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:32.335 [2024-10-29 11:06:37.574686] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:32.335 [2024-10-29 11:06:37.574691] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:32.335 [2024-10-29 11:06:37.574696] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:32.335 [2024-10-29 11:06:37.574702] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:32.335 [2024-10-29 11:06:37.574707] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:32.335 [2024-10-29 11:06:37.574717] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:32.335 [2024-10-29 11:06:37.574725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.335 [2024-10-29 11:06:37.574764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.335 [2024-10-29 11:06:37.574830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.335 [2024-10-29 11:06:37.574837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.335 [2024-10-29 11:06:37.574841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.335 [2024-10-29 11:06:37.574865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.335 
[2024-10-29 11:06:37.574890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.335 [2024-10-29 11:06:37.574911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.335 [2024-10-29 11:06:37.574931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574936] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.335 [2024-10-29 11:06:37.574951] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:32.335 [2024-10-29 11:06:37.574961] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:32.335 [2024-10-29 11:06:37.574968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.574972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.574980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.335 [2024-10-29 11:06:37.575002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1564fc0, cid 0, qid 0 00:17:32.335 [2024-10-29 11:06:37.575010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565140, cid 1, qid 0 00:17:32.335 [2024-10-29 11:06:37.575015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15652c0, cid 2, qid 0 00:17:32.335 [2024-10-29 11:06:37.575020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.335 [2024-10-29 11:06:37.575025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15655c0, cid 4, qid 0 00:17:32.335 [2024-10-29 11:06:37.575119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.335 [2024-10-29 11:06:37.575126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.335 [2024-10-29 11:06:37.575130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15655c0) on tqpair=0x151eb00 00:17:32.335 [2024-10-29 
11:06:37.575144] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:32.335 [2024-10-29 11:06:37.575151] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:32.335 [2024-10-29 11:06:37.575163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.575175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.335 [2024-10-29 11:06:37.575195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15655c0, cid 4, qid 0 00:17:32.335 [2024-10-29 11:06:37.575251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.335 [2024-10-29 11:06:37.575258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.335 [2024-10-29 11:06:37.575262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x151eb00): datao=0, datal=4096, cccid=4 00:17:32.335 [2024-10-29 11:06:37.575271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15655c0) on tqpair(0x151eb00): expected_datao=0, payload_size=4096 00:17:32.335 [2024-10-29 11:06:37.575276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575284] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575288] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.335 [2024-10-29 11:06:37.575303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.335 [2024-10-29 11:06:37.575307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15655c0) on tqpair=0x151eb00 00:17:32.335 [2024-10-29 11:06:37.575326] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:32.335 [2024-10-29 11:06:37.575401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.575422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.335 [2024-10-29 11:06:37.575432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x151eb00) 00:17:32.335 [2024-10-29 11:06:37.575447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.335 [2024-10-29 11:06:37.575478] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15655c0, cid 4, qid 0 00:17:32.335 [2024-10-29 11:06:37.575487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565740, cid 5, qid 0 00:17:32.335 [2024-10-29 11:06:37.575592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.335 [2024-10-29 11:06:37.575606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.335 [2024-10-29 11:06:37.575610] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575615] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x151eb00): datao=0, datal=1024, cccid=4 00:17:32.335 [2024-10-29 11:06:37.575620] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15655c0) on tqpair(0x151eb00): expected_datao=0, payload_size=1024 00:17:32.335 [2024-10-29 11:06:37.575625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575633] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575638] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.335 [2024-10-29 11:06:37.575650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.335 [2024-10-29 11:06:37.575654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565740) on tqpair=0x151eb00 00:17:32.335 [2024-10-29 11:06:37.575678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.335 [2024-10-29 11:06:37.575687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.335 [2024-10-29 11:06:37.575691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15655c0) on tqpair=0x151eb00 00:17:32.335 [2024-10-29 11:06:37.575708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.335 [2024-10-29 11:06:37.575713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x151eb00) 00:17:32.336 [2024-10-29 11:06:37.575721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.336 [2024-10-29 11:06:37.575748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15655c0, cid 4, qid 0 00:17:32.336 [2024-10-29 11:06:37.575832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.336 [2024-10-29 11:06:37.575843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.336 [2024-10-29 11:06:37.575848] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.575852] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x151eb00): datao=0, datal=3072, cccid=4 00:17:32.336 [2024-10-29 11:06:37.575857] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15655c0) on tqpair(0x151eb00): expected_datao=0, payload_size=3072 00:17:32.336 [2024-10-29 11:06:37.575862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.575869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:32.336 [2024-10-29 11:06:37.575874] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.575882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.336 [2024-10-29 11:06:37.575889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.336 [2024-10-29 11:06:37.575893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.575897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15655c0) on tqpair=0x151eb00 00:17:32.336 [2024-10-29 11:06:37.575907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.575912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x151eb00) 00:17:32.336 [2024-10-29 11:06:37.575919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.336 [2024-10-29 11:06:37.575944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15655c0, cid 4, qid 0 00:17:32.336 [2024-10-29 11:06:37.576008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.336 [2024-10-29 11:06:37.576015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.336 [2024-10-29 11:06:37.576019] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.576023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x151eb00): datao=0, datal=8, cccid=4 00:17:32.336 [2024-10-29 11:06:37.576028] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15655c0) on tqpair(0x151eb00): expected_datao=0, payload_size=8 00:17:32.336 [2024-10-29 11:06:37.576032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.576039] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.576044] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.576059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.336 [2024-10-29 11:06:37.576066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.336 [2024-10-29 11:06:37.576070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.336 [2024-10-29 11:06:37.576074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15655c0) on tqpair=0x151eb00 00:17:32.336 ===================================================== 00:17:32.336 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:32.336 ===================================================== 00:17:32.336 Controller Capabilities/Features 00:17:32.336 ================================ 00:17:32.336 Vendor ID: 0000 00:17:32.336 Subsystem Vendor ID: 0000 00:17:32.336 Serial Number: .................... 00:17:32.336 Model Number: ........................................ 
00:17:32.336 Firmware Version: 25.01 00:17:32.336 Recommended Arb Burst: 0 00:17:32.336 IEEE OUI Identifier: 00 00 00 00:17:32.336 Multi-path I/O 00:17:32.336 May have multiple subsystem ports: No 00:17:32.336 May have multiple controllers: No 00:17:32.336 Associated with SR-IOV VF: No 00:17:32.336 Max Data Transfer Size: 131072 00:17:32.336 Max Number of Namespaces: 0 00:17:32.336 Max Number of I/O Queues: 1024 00:17:32.336 NVMe Specification Version (VS): 1.3 00:17:32.336 NVMe Specification Version (Identify): 1.3 00:17:32.336 Maximum Queue Entries: 128 00:17:32.336 Contiguous Queues Required: Yes 00:17:32.336 Arbitration Mechanisms Supported 00:17:32.336 Weighted Round Robin: Not Supported 00:17:32.336 Vendor Specific: Not Supported 00:17:32.336 Reset Timeout: 15000 ms 00:17:32.336 Doorbell Stride: 4 bytes 00:17:32.336 NVM Subsystem Reset: Not Supported 00:17:32.336 Command Sets Supported 00:17:32.336 NVM Command Set: Supported 00:17:32.336 Boot Partition: Not Supported 00:17:32.336 Memory Page Size Minimum: 4096 bytes 00:17:32.336 Memory Page Size Maximum: 4096 bytes 00:17:32.336 Persistent Memory Region: Not Supported 00:17:32.336 Optional Asynchronous Events Supported 00:17:32.336 Namespace Attribute Notices: Not Supported 00:17:32.336 Firmware Activation Notices: Not Supported 00:17:32.336 ANA Change Notices: Not Supported 00:17:32.336 PLE Aggregate Log Change Notices: Not Supported 00:17:32.336 LBA Status Info Alert Notices: Not Supported 00:17:32.336 EGE Aggregate Log Change Notices: Not Supported 00:17:32.336 Normal NVM Subsystem Shutdown event: Not Supported 00:17:32.336 Zone Descriptor Change Notices: Not Supported 00:17:32.336 Discovery Log Change Notices: Supported 00:17:32.336 Controller Attributes 00:17:32.336 128-bit Host Identifier: Not Supported 00:17:32.336 Non-Operational Permissive Mode: Not Supported 00:17:32.336 NVM Sets: Not Supported 00:17:32.336 Read Recovery Levels: Not Supported 00:17:32.336 Endurance Groups: Not Supported 00:17:32.336 Predictable Latency Mode: Not Supported 00:17:32.336 Traffic Based Keep ALive: Not Supported 00:17:32.336 Namespace Granularity: Not Supported 00:17:32.336 SQ Associations: Not Supported 00:17:32.336 UUID List: Not Supported 00:17:32.336 Multi-Domain Subsystem: Not Supported 00:17:32.336 Fixed Capacity Management: Not Supported 00:17:32.336 Variable Capacity Management: Not Supported 00:17:32.336 Delete Endurance Group: Not Supported 00:17:32.336 Delete NVM Set: Not Supported 00:17:32.336 Extended LBA Formats Supported: Not Supported 00:17:32.336 Flexible Data Placement Supported: Not Supported 00:17:32.336 00:17:32.336 Controller Memory Buffer Support 00:17:32.336 ================================ 00:17:32.336 Supported: No 00:17:32.336 00:17:32.336 Persistent Memory Region Support 00:17:32.336 ================================ 00:17:32.336 Supported: No 00:17:32.336 00:17:32.336 Admin Command Set Attributes 00:17:32.336 ============================ 00:17:32.336 Security Send/Receive: Not Supported 00:17:32.336 Format NVM: Not Supported 00:17:32.336 Firmware Activate/Download: Not Supported 00:17:32.336 Namespace Management: Not Supported 00:17:32.336 Device Self-Test: Not Supported 00:17:32.336 Directives: Not Supported 00:17:32.336 NVMe-MI: Not Supported 00:17:32.336 Virtualization Management: Not Supported 00:17:32.336 Doorbell Buffer Config: Not Supported 00:17:32.336 Get LBA Status Capability: Not Supported 00:17:32.336 Command & Feature Lockdown Capability: Not Supported 00:17:32.336 Abort Command Limit: 1 00:17:32.336 Async 
Event Request Limit: 4 00:17:32.336 Number of Firmware Slots: N/A 00:17:32.336 Firmware Slot 1 Read-Only: N/A 00:17:32.336 Firmware Activation Without Reset: N/A 00:17:32.336 Multiple Update Detection Support: N/A 00:17:32.336 Firmware Update Granularity: No Information Provided 00:17:32.336 Per-Namespace SMART Log: No 00:17:32.336 Asymmetric Namespace Access Log Page: Not Supported 00:17:32.336 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:32.336 Command Effects Log Page: Not Supported 00:17:32.336 Get Log Page Extended Data: Supported 00:17:32.336 Telemetry Log Pages: Not Supported 00:17:32.336 Persistent Event Log Pages: Not Supported 00:17:32.336 Supported Log Pages Log Page: May Support 00:17:32.336 Commands Supported & Effects Log Page: Not Supported 00:17:32.336 Feature Identifiers & Effects Log Page:May Support 00:17:32.336 NVMe-MI Commands & Effects Log Page: May Support 00:17:32.336 Data Area 4 for Telemetry Log: Not Supported 00:17:32.336 Error Log Page Entries Supported: 128 00:17:32.336 Keep Alive: Not Supported 00:17:32.336 00:17:32.336 NVM Command Set Attributes 00:17:32.336 ========================== 00:17:32.336 Submission Queue Entry Size 00:17:32.336 Max: 1 00:17:32.336 Min: 1 00:17:32.336 Completion Queue Entry Size 00:17:32.336 Max: 1 00:17:32.336 Min: 1 00:17:32.336 Number of Namespaces: 0 00:17:32.336 Compare Command: Not Supported 00:17:32.336 Write Uncorrectable Command: Not Supported 00:17:32.336 Dataset Management Command: Not Supported 00:17:32.336 Write Zeroes Command: Not Supported 00:17:32.336 Set Features Save Field: Not Supported 00:17:32.336 Reservations: Not Supported 00:17:32.336 Timestamp: Not Supported 00:17:32.336 Copy: Not Supported 00:17:32.336 Volatile Write Cache: Not Present 00:17:32.336 Atomic Write Unit (Normal): 1 00:17:32.336 Atomic Write Unit (PFail): 1 00:17:32.336 Atomic Compare & Write Unit: 1 00:17:32.336 Fused Compare & Write: Supported 00:17:32.336 Scatter-Gather List 00:17:32.336 SGL Command Set: Supported 00:17:32.336 SGL Keyed: Supported 00:17:32.336 SGL Bit Bucket Descriptor: Not Supported 00:17:32.336 SGL Metadata Pointer: Not Supported 00:17:32.336 Oversized SGL: Not Supported 00:17:32.336 SGL Metadata Address: Not Supported 00:17:32.336 SGL Offset: Supported 00:17:32.336 Transport SGL Data Block: Not Supported 00:17:32.336 Replay Protected Memory Block: Not Supported 00:17:32.336 00:17:32.336 Firmware Slot Information 00:17:32.336 ========================= 00:17:32.336 Active slot: 0 00:17:32.336 00:17:32.336 00:17:32.336 Error Log 00:17:32.336 ========= 00:17:32.336 00:17:32.336 Active Namespaces 00:17:32.336 ================= 00:17:32.336 Discovery Log Page 00:17:32.336 ================== 00:17:32.336 Generation Counter: 2 00:17:32.336 Number of Records: 2 00:17:32.336 Record Format: 0 00:17:32.336 00:17:32.337 Discovery Log Entry 0 00:17:32.337 ---------------------- 00:17:32.337 Transport Type: 3 (TCP) 00:17:32.337 Address Family: 1 (IPv4) 00:17:32.337 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:32.337 Entry Flags: 00:17:32.337 Duplicate Returned Information: 1 00:17:32.337 Explicit Persistent Connection Support for Discovery: 1 00:17:32.337 Transport Requirements: 00:17:32.337 Secure Channel: Not Required 00:17:32.337 Port ID: 0 (0x0000) 00:17:32.337 Controller ID: 65535 (0xffff) 00:17:32.337 Admin Max SQ Size: 128 00:17:32.337 Transport Service Identifier: 4420 00:17:32.337 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:32.337 Transport Address: 10.0.0.3 00:17:32.337 
Discovery Log Entry 1 00:17:32.337 ---------------------- 00:17:32.337 Transport Type: 3 (TCP) 00:17:32.337 Address Family: 1 (IPv4) 00:17:32.337 Subsystem Type: 2 (NVM Subsystem) 00:17:32.337 Entry Flags: 00:17:32.337 Duplicate Returned Information: 0 00:17:32.337 Explicit Persistent Connection Support for Discovery: 0 00:17:32.337 Transport Requirements: 00:17:32.337 Secure Channel: Not Required 00:17:32.337 Port ID: 0 (0x0000) 00:17:32.337 Controller ID: 65535 (0xffff) 00:17:32.337 Admin Max SQ Size: 128 00:17:32.337 Transport Service Identifier: 4420 00:17:32.337 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:32.337 Transport Address: 10.0.0.3 [2024-10-29 11:06:37.576172] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:32.337 [2024-10-29 11:06:37.576189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1564fc0) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.337 [2024-10-29 11:06:37.576202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565140) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.337 [2024-10-29 11:06:37.576213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15652c0) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.337 [2024-10-29 11:06:37.576223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.337 [2024-10-29 11:06:37.576238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.576255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.576279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.576323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.576331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.576335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 
11:06:37.576410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.576438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.576506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.576513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.576517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576528] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:32.337 [2024-10-29 11:06:37.576533] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:32.337 [2024-10-29 11:06:37.576544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.576562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.576582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.576633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.576640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.576644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.576678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.576697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.576759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.576766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.576770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576794] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.576802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.576820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.576863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.576870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.576874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.576906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.576924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.576966] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.576973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.576977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.576992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.576998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.577009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.577028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.577075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.577092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.577096] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.577111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.577129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.577147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.577194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.577201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.577205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.337 [2024-10-29 11:06:37.577220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.337 [2024-10-29 11:06:37.577229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.337 [2024-10-29 11:06:37.577237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.337 [2024-10-29 11:06:37.577256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.337 [2024-10-29 11:06:37.577304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.337 [2024-10-29 11:06:37.577311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.337 [2024-10-29 11:06:37.577314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.577319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.338 [2024-10-29 11:06:37.577329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.577335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.577339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.338 [2024-10-29 11:06:37.577346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.338 [2024-10-29 11:06:37.577365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.338 [2024-10-29 11:06:37.581433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.581450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.581455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.581460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.338 [2024-10-29 11:06:37.581474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.581479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.581483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x151eb00) 00:17:32.338 [2024-10-29 11:06:37.581492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.338 [2024-10-29 11:06:37.581517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1565440, cid 3, qid 0 00:17:32.338 
[2024-10-29 11:06:37.581576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.581583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.581587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.581591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1565440) on tqpair=0x151eb00 00:17:32.338 [2024-10-29 11:06:37.581616] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:17:32.338 00:17:32.338 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:32.338 [2024-10-29 11:06:37.625218] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:17:32.338 [2024-10-29 11:06:37.625278] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88894 ] 00:17:32.338 [2024-10-29 11:06:37.776956] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:32.338 [2024-10-29 11:06:37.777033] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:32.338 [2024-10-29 11:06:37.777040] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:32.338 [2024-10-29 11:06:37.777051] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:32.338 [2024-10-29 11:06:37.777060] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:32.338 [2024-10-29 11:06:37.777351] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:32.338 [2024-10-29 11:06:37.781496] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19a8b00 0 00:17:32.338 [2024-10-29 11:06:37.781544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:32.338 [2024-10-29 11:06:37.781553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:32.338 [2024-10-29 11:06:37.781558] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:32.338 [2024-10-29 11:06:37.781561] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:32.338 [2024-10-29 11:06:37.781587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.781593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.781597] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.338 [2024-10-29 11:06:37.781609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:32.338 [2024-10-29 11:06:37.781634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.338 [2024-10-29 11:06:37.789578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.789601] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.789623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.338 [2024-10-29 11:06:37.789643] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:32.338 [2024-10-29 11:06:37.789651] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:32.338 [2024-10-29 11:06:37.789657] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:32.338 [2024-10-29 11:06:37.789674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.338 [2024-10-29 11:06:37.789692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.338 [2024-10-29 11:06:37.789720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.338 [2024-10-29 11:06:37.789807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.789814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.789818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.338 [2024-10-29 11:06:37.789828] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:32.338 [2024-10-29 11:06:37.789835] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:32.338 [2024-10-29 11:06:37.789843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.338 [2024-10-29 11:06:37.789874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.338 [2024-10-29 11:06:37.789909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.338 [2024-10-29 11:06:37.789955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.789962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.789966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.338 [2024-10-29 11:06:37.789976] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:32.338 [2024-10-29 11:06:37.789985] 
nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:32.338 [2024-10-29 11:06:37.789992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.789997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.790000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.338 [2024-10-29 11:06:37.790008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.338 [2024-10-29 11:06:37.790026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.338 [2024-10-29 11:06:37.790066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.790073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.790077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.790081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.338 [2024-10-29 11:06:37.790086] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:32.338 [2024-10-29 11:06:37.790101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.790107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.790110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.338 [2024-10-29 11:06:37.790118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.338 [2024-10-29 11:06:37.790136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.338 [2024-10-29 11:06:37.790182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.338 [2024-10-29 11:06:37.790189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.338 [2024-10-29 11:06:37.790193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.338 [2024-10-29 11:06:37.790197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.338 [2024-10-29 11:06:37.790202] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:32.338 [2024-10-29 11:06:37.790207] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:32.338 [2024-10-29 11:06:37.790215] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:32.338 [2024-10-29 11:06:37.790325] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:32.339 [2024-10-29 11:06:37.790331] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:32.339 [2024-10-29 11:06:37.790340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.790372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.339 [2024-10-29 11:06:37.790391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.339 [2024-10-29 11:06:37.790449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.339 [2024-10-29 11:06:37.790456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.339 [2024-10-29 11:06:37.790460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.339 [2024-10-29 11:06:37.790470] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:32.339 [2024-10-29 11:06:37.790503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.790522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.339 [2024-10-29 11:06:37.790544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.339 [2024-10-29 11:06:37.790599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.339 [2024-10-29 11:06:37.790607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.339 [2024-10-29 11:06:37.790611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.339 [2024-10-29 11:06:37.790621] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:32.339 [2024-10-29 11:06:37.790627] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.790635] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:32.339 [2024-10-29 11:06:37.790651] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.790661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.790674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.339 [2024-10-29 11:06:37.790695] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.339 [2024-10-29 11:06:37.790780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.339 [2024-10-29 11:06:37.790787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.339 [2024-10-29 11:06:37.790791] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790796] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=4096, cccid=0 00:17:32.339 [2024-10-29 11:06:37.790801] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19eefc0) on tqpair(0x19a8b00): expected_datao=0, payload_size=4096 00:17:32.339 [2024-10-29 11:06:37.790806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790815] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790819] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.339 [2024-10-29 11:06:37.790835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.339 [2024-10-29 11:06:37.790844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.339 [2024-10-29 11:06:37.790858] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:32.339 [2024-10-29 11:06:37.790863] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:32.339 [2024-10-29 11:06:37.790868] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:32.339 [2024-10-29 11:06:37.790873] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:32.339 [2024-10-29 11:06:37.790879] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:32.339 [2024-10-29 11:06:37.790884] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.790893] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.790901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.790910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.790918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.339 [2024-10-29 11:06:37.790939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.339 [2024-10-29 11:06:37.790989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.339 [2024-10-29 11:06:37.790996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.339 [2024-10-29 
11:06:37.791000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.339 [2024-10-29 11:06:37.791017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.791033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.339 [2024-10-29 11:06:37.791040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.791069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.339 [2024-10-29 11:06:37.791076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.791090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.339 [2024-10-29 11:06:37.791097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.791111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.339 [2024-10-29 11:06:37.791117] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791126] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.791145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.339 [2024-10-29 11:06:37.791166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19eefc0, cid 0, qid 0 00:17:32.339 [2024-10-29 11:06:37.791173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef140, cid 1, qid 0 00:17:32.339 [2024-10-29 11:06:37.791194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef2c0, cid 2, qid 0 00:17:32.339 
[2024-10-29 11:06:37.791200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.339 [2024-10-29 11:06:37.791205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.339 [2024-10-29 11:06:37.791295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.339 [2024-10-29 11:06:37.791302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.339 [2024-10-29 11:06:37.791306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.339 [2024-10-29 11:06:37.791320] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:32.339 [2024-10-29 11:06:37.791327] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791336] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791343] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.339 [2024-10-29 11:06:37.791366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.339 [2024-10-29 11:06:37.791386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.339 [2024-10-29 11:06:37.791453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.339 [2024-10-29 11:06:37.791462] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.339 [2024-10-29 11:06:37.791466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.339 [2024-10-29 11:06:37.791538] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791551] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:32.339 [2024-10-29 11:06:37.791560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.339 [2024-10-29 11:06:37.791564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.791572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.791594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.340 
[2024-10-29 11:06:37.791655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.340 [2024-10-29 11:06:37.791663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.340 [2024-10-29 11:06:37.791667] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=4096, cccid=4 00:17:32.340 [2024-10-29 11:06:37.791677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ef5c0) on tqpair(0x19a8b00): expected_datao=0, payload_size=4096 00:17:32.340 [2024-10-29 11:06:37.791681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791689] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791694] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.791709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.791713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.791728] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:32.340 [2024-10-29 11:06:37.791740] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.791752] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.791760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.791773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.791794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.340 [2024-10-29 11:06:37.791893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.340 [2024-10-29 11:06:37.791900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.340 [2024-10-29 11:06:37.791905] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791909] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=4096, cccid=4 00:17:32.340 [2024-10-29 11:06:37.791914] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ef5c0) on tqpair(0x19a8b00): expected_datao=0, payload_size=4096 00:17:32.340 [2024-10-29 11:06:37.791919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791930] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.791946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.791950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.791970] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.791982] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.791991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.791996] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.792025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.340 [2024-10-29 11:06:37.792088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.340 [2024-10-29 11:06:37.792095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.340 [2024-10-29 11:06:37.792099] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792103] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=4096, cccid=4 00:17:32.340 [2024-10-29 11:06:37.792108] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ef5c0) on tqpair(0x19a8b00): expected_datao=0, payload_size=4096 00:17:32.340 [2024-10-29 11:06:37.792113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792120] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792124] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.792140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.792144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.792158] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.792168] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.792181] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.792189] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:32.340 [2024-10-29 
11:06:37.792194] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.792200] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.792206] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:32.340 [2024-10-29 11:06:37.792211] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:32.340 [2024-10-29 11:06:37.792217] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:32.340 [2024-10-29 11:06:37.792233] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792238] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.792253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.340 [2024-10-29 11:06:37.792294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.340 [2024-10-29 11:06:37.792303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef740, cid 5, qid 0 00:17:32.340 [2024-10-29 11:06:37.792390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.792401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.792405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.792417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.792424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.792427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef740) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.792443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.792478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef740, cid 5, qid 0 
00:17:32.340 [2024-10-29 11:06:37.792528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.792535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.792539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef740) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.792555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.792586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef740, cid 5, qid 0 00:17:32.340 [2024-10-29 11:06:37.792652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.792659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.792663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef740) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.792678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.792709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef740, cid 5, qid 0 00:17:32.340 [2024-10-29 11:06:37.792760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.340 [2024-10-29 11:06:37.792767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.340 [2024-10-29 11:06:37.792771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef740) on tqpair=0x19a8b00 00:17:32.340 [2024-10-29 11:06:37.792794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a8b00) 00:17:32.340 [2024-10-29 11:06:37.792823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.340 [2024-10-29 11:06:37.792830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.340 [2024-10-29 11:06:37.792835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a8b00) 00:17:32.341 [2024-10-29 11:06:37.792841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.341 [2024-10-29 11:06:37.792863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:32.341 [2024-10-29 11:06:37.792867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19a8b00) 00:17:32.341 [2024-10-29 11:06:37.792874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.341 [2024-10-29 11:06:37.792884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.792888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19a8b00) 00:17:32.341 [2024-10-29 11:06:37.792894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.341 [2024-10-29 11:06:37.792914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef740, cid 5, qid 0 00:17:32.341 [2024-10-29 11:06:37.792922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef5c0, cid 4, qid 0 00:17:32.341 [2024-10-29 11:06:37.792927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef8c0, cid 6, qid 0 00:17:32.341 [2024-10-29 11:06:37.792931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19efa40, cid 7, qid 0 00:17:32.341 [2024-10-29 11:06:37.793059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.341 [2024-10-29 11:06:37.793066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.341 [2024-10-29 11:06:37.793070] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793074] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=8192, cccid=5 00:17:32.341 [2024-10-29 11:06:37.793078] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ef740) on tqpair(0x19a8b00): expected_datao=0, payload_size=8192 00:17:32.341 [2024-10-29 11:06:37.793083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793115] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793120] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.341 [2024-10-29 11:06:37.793132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.341 [2024-10-29 11:06:37.793136] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793140] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=512, cccid=4 00:17:32.341 [2024-10-29 11:06:37.793145] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ef5c0) on tqpair(0x19a8b00): expected_datao=0, payload_size=512 00:17:32.341 [2024-10-29 11:06:37.793149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793156] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793160] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.341 [2024-10-29 11:06:37.793171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.341 [2024-10-29 11:06:37.793175] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793179] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=512, cccid=6 00:17:32.341 [2024-10-29 11:06:37.793184] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19ef8c0) on tqpair(0x19a8b00): expected_datao=0, payload_size=512 00:17:32.341 [2024-10-29 11:06:37.793188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793195] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.341 [2024-10-29 11:06:37.793210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.341 [2024-10-29 11:06:37.793214] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793218] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a8b00): datao=0, datal=4096, cccid=7 00:17:32.341 [2024-10-29 11:06:37.793222] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19efa40) on tqpair(0x19a8b00): expected_datao=0, payload_size=4096 00:17:32.341 [2024-10-29 11:06:37.793227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793233] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793237] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.341 [2024-10-29 11:06:37.793252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.341 [2024-10-29 11:06:37.793256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef740) on tqpair=0x19a8b00 00:17:32.341 [2024-10-29 11:06:37.793275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.341 [2024-10-29 11:06:37.793282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.341 [2024-10-29 11:06:37.793286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef5c0) on tqpair=0x19a8b00 00:17:32.341 [2024-10-29 11:06:37.793302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.341 [2024-10-29 11:06:37.793308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.341 [2024-10-29 11:06:37.793312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef8c0) on tqpair=0x19a8b00 00:17:32.341 [2024-10-29 11:06:37.793324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.341 [2024-10-29 11:06:37.793330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.341 [2024-10-29 11:06:37.793334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.341 [2024-10-29 11:06:37.793338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19efa40) on tqpair=0x19a8b00 00:17:32.341 
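The trace above is the admin-queue initialization sequence the SPDK NVMe host driver runs against the TCP target (IDENTIFY, GET FEATURES, GET LOG PAGE, KEEP ALIVE capsules on tqpair 0x19a8b00), after which the controller summary below is printed. As a point of reference only, the following is a minimal sketch (not part of the test output) of connecting to the same target with SPDK's public host API; the transport string values are taken from the log (10.0.0.3:4420, nqn.2016-06.io.spdk:cnode1), the rest is illustrative and error handling is trimmed.

/*
 * Hypothetical standalone example, assuming the standard SPDK host API
 * (spdk_nvme_connect / spdk_nvme_ctrlr_get_data). spdk_nvme_connect()
 * performs the same controller attach that produces the IDENTIFY /
 * GET FEATURES / GET LOG PAGE debug records seen in the trace above.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages, PCI, etc.). */
	env_opts.opts_size = sizeof(env_opts);
	spdk_env_opts_init(&env_opts);
	env_opts.name = "nvmf_tcp_connect_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same NVMe-oF/TCP target the autotest log is exercising. */
	if (spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Attach: drives controller init (identify, features, log pages). */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.traddr);
		return 1;
	}

	/* The identify data behind the "Model Number" / "Firmware Version"
	 * fields in the controller summary printed below. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s  FW: %.8s\n", cdata->mn, cdata->fr);

	/* Detach triggers the shutdown/destruct sequence traced further on. */
	spdk_nvme_detach(ctrlr);
	return 0;
}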
===================================================== 00:17:32.341 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:32.341 ===================================================== 00:17:32.341 Controller Capabilities/Features 00:17:32.341 ================================ 00:17:32.341 Vendor ID: 8086 00:17:32.341 Subsystem Vendor ID: 8086 00:17:32.341 Serial Number: SPDK00000000000001 00:17:32.341 Model Number: SPDK bdev Controller 00:17:32.341 Firmware Version: 25.01 00:17:32.341 Recommended Arb Burst: 6 00:17:32.341 IEEE OUI Identifier: e4 d2 5c 00:17:32.341 Multi-path I/O 00:17:32.341 May have multiple subsystem ports: Yes 00:17:32.341 May have multiple controllers: Yes 00:17:32.341 Associated with SR-IOV VF: No 00:17:32.341 Max Data Transfer Size: 131072 00:17:32.341 Max Number of Namespaces: 32 00:17:32.341 Max Number of I/O Queues: 127 00:17:32.341 NVMe Specification Version (VS): 1.3 00:17:32.341 NVMe Specification Version (Identify): 1.3 00:17:32.341 Maximum Queue Entries: 128 00:17:32.341 Contiguous Queues Required: Yes 00:17:32.341 Arbitration Mechanisms Supported 00:17:32.341 Weighted Round Robin: Not Supported 00:17:32.341 Vendor Specific: Not Supported 00:17:32.341 Reset Timeout: 15000 ms 00:17:32.341 Doorbell Stride: 4 bytes 00:17:32.341 NVM Subsystem Reset: Not Supported 00:17:32.341 Command Sets Supported 00:17:32.341 NVM Command Set: Supported 00:17:32.341 Boot Partition: Not Supported 00:17:32.341 Memory Page Size Minimum: 4096 bytes 00:17:32.341 Memory Page Size Maximum: 4096 bytes 00:17:32.341 Persistent Memory Region: Not Supported 00:17:32.341 Optional Asynchronous Events Supported 00:17:32.341 Namespace Attribute Notices: Supported 00:17:32.341 Firmware Activation Notices: Not Supported 00:17:32.341 ANA Change Notices: Not Supported 00:17:32.341 PLE Aggregate Log Change Notices: Not Supported 00:17:32.341 LBA Status Info Alert Notices: Not Supported 00:17:32.341 EGE Aggregate Log Change Notices: Not Supported 00:17:32.341 Normal NVM Subsystem Shutdown event: Not Supported 00:17:32.341 Zone Descriptor Change Notices: Not Supported 00:17:32.341 Discovery Log Change Notices: Not Supported 00:17:32.341 Controller Attributes 00:17:32.341 128-bit Host Identifier: Supported 00:17:32.341 Non-Operational Permissive Mode: Not Supported 00:17:32.341 NVM Sets: Not Supported 00:17:32.341 Read Recovery Levels: Not Supported 00:17:32.341 Endurance Groups: Not Supported 00:17:32.341 Predictable Latency Mode: Not Supported 00:17:32.341 Traffic Based Keep ALive: Not Supported 00:17:32.341 Namespace Granularity: Not Supported 00:17:32.341 SQ Associations: Not Supported 00:17:32.341 UUID List: Not Supported 00:17:32.341 Multi-Domain Subsystem: Not Supported 00:17:32.341 Fixed Capacity Management: Not Supported 00:17:32.341 Variable Capacity Management: Not Supported 00:17:32.341 Delete Endurance Group: Not Supported 00:17:32.341 Delete NVM Set: Not Supported 00:17:32.341 Extended LBA Formats Supported: Not Supported 00:17:32.341 Flexible Data Placement Supported: Not Supported 00:17:32.341 00:17:32.341 Controller Memory Buffer Support 00:17:32.341 ================================ 00:17:32.341 Supported: No 00:17:32.341 00:17:32.341 Persistent Memory Region Support 00:17:32.341 ================================ 00:17:32.341 Supported: No 00:17:32.341 00:17:32.341 Admin Command Set Attributes 00:17:32.341 ============================ 00:17:32.341 Security Send/Receive: Not Supported 00:17:32.341 Format NVM: Not Supported 00:17:32.341 Firmware Activate/Download: 
Not Supported 00:17:32.341 Namespace Management: Not Supported 00:17:32.341 Device Self-Test: Not Supported 00:17:32.341 Directives: Not Supported 00:17:32.341 NVMe-MI: Not Supported 00:17:32.341 Virtualization Management: Not Supported 00:17:32.341 Doorbell Buffer Config: Not Supported 00:17:32.341 Get LBA Status Capability: Not Supported 00:17:32.341 Command & Feature Lockdown Capability: Not Supported 00:17:32.341 Abort Command Limit: 4 00:17:32.341 Async Event Request Limit: 4 00:17:32.341 Number of Firmware Slots: N/A 00:17:32.341 Firmware Slot 1 Read-Only: N/A 00:17:32.341 Firmware Activation Without Reset: N/A 00:17:32.341 Multiple Update Detection Support: N/A 00:17:32.342 Firmware Update Granularity: No Information Provided 00:17:32.342 Per-Namespace SMART Log: No 00:17:32.342 Asymmetric Namespace Access Log Page: Not Supported 00:17:32.342 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:32.342 Command Effects Log Page: Supported 00:17:32.342 Get Log Page Extended Data: Supported 00:17:32.342 Telemetry Log Pages: Not Supported 00:17:32.342 Persistent Event Log Pages: Not Supported 00:17:32.342 Supported Log Pages Log Page: May Support 00:17:32.342 Commands Supported & Effects Log Page: Not Supported 00:17:32.342 Feature Identifiers & Effects Log Page:May Support 00:17:32.342 NVMe-MI Commands & Effects Log Page: May Support 00:17:32.342 Data Area 4 for Telemetry Log: Not Supported 00:17:32.342 Error Log Page Entries Supported: 128 00:17:32.342 Keep Alive: Supported 00:17:32.342 Keep Alive Granularity: 10000 ms 00:17:32.342 00:17:32.342 NVM Command Set Attributes 00:17:32.342 ========================== 00:17:32.342 Submission Queue Entry Size 00:17:32.342 Max: 64 00:17:32.342 Min: 64 00:17:32.342 Completion Queue Entry Size 00:17:32.342 Max: 16 00:17:32.342 Min: 16 00:17:32.342 Number of Namespaces: 32 00:17:32.342 Compare Command: Supported 00:17:32.342 Write Uncorrectable Command: Not Supported 00:17:32.342 Dataset Management Command: Supported 00:17:32.342 Write Zeroes Command: Supported 00:17:32.342 Set Features Save Field: Not Supported 00:17:32.342 Reservations: Supported 00:17:32.342 Timestamp: Not Supported 00:17:32.342 Copy: Supported 00:17:32.342 Volatile Write Cache: Present 00:17:32.342 Atomic Write Unit (Normal): 1 00:17:32.342 Atomic Write Unit (PFail): 1 00:17:32.342 Atomic Compare & Write Unit: 1 00:17:32.342 Fused Compare & Write: Supported 00:17:32.342 Scatter-Gather List 00:17:32.342 SGL Command Set: Supported 00:17:32.342 SGL Keyed: Supported 00:17:32.342 SGL Bit Bucket Descriptor: Not Supported 00:17:32.342 SGL Metadata Pointer: Not Supported 00:17:32.342 Oversized SGL: Not Supported 00:17:32.342 SGL Metadata Address: Not Supported 00:17:32.342 SGL Offset: Supported 00:17:32.342 Transport SGL Data Block: Not Supported 00:17:32.342 Replay Protected Memory Block: Not Supported 00:17:32.342 00:17:32.342 Firmware Slot Information 00:17:32.342 ========================= 00:17:32.342 Active slot: 1 00:17:32.342 Slot 1 Firmware Revision: 25.01 00:17:32.342 00:17:32.342 00:17:32.342 Commands Supported and Effects 00:17:32.342 ============================== 00:17:32.342 Admin Commands 00:17:32.342 -------------- 00:17:32.342 Get Log Page (02h): Supported 00:17:32.342 Identify (06h): Supported 00:17:32.342 Abort (08h): Supported 00:17:32.342 Set Features (09h): Supported 00:17:32.342 Get Features (0Ah): Supported 00:17:32.342 Asynchronous Event Request (0Ch): Supported 00:17:32.342 Keep Alive (18h): Supported 00:17:32.342 I/O Commands 00:17:32.342 ------------ 00:17:32.342 
Flush (00h): Supported LBA-Change 00:17:32.342 Write (01h): Supported LBA-Change 00:17:32.342 Read (02h): Supported 00:17:32.342 Compare (05h): Supported 00:17:32.342 Write Zeroes (08h): Supported LBA-Change 00:17:32.342 Dataset Management (09h): Supported LBA-Change 00:17:32.342 Copy (19h): Supported LBA-Change 00:17:32.342 00:17:32.342 Error Log 00:17:32.342 ========= 00:17:32.342 00:17:32.342 Arbitration 00:17:32.342 =========== 00:17:32.342 Arbitration Burst: 1 00:17:32.342 00:17:32.342 Power Management 00:17:32.342 ================ 00:17:32.342 Number of Power States: 1 00:17:32.342 Current Power State: Power State #0 00:17:32.342 Power State #0: 00:17:32.342 Max Power: 0.00 W 00:17:32.342 Non-Operational State: Operational 00:17:32.342 Entry Latency: Not Reported 00:17:32.342 Exit Latency: Not Reported 00:17:32.342 Relative Read Throughput: 0 00:17:32.342 Relative Read Latency: 0 00:17:32.342 Relative Write Throughput: 0 00:17:32.342 Relative Write Latency: 0 00:17:32.342 Idle Power: Not Reported 00:17:32.342 Active Power: Not Reported 00:17:32.342 Non-Operational Permissive Mode: Not Supported 00:17:32.342 00:17:32.342 Health Information 00:17:32.342 ================== 00:17:32.342 Critical Warnings: 00:17:32.342 Available Spare Space: OK 00:17:32.342 Temperature: OK 00:17:32.342 Device Reliability: OK 00:17:32.342 Read Only: No 00:17:32.342 Volatile Memory Backup: OK 00:17:32.342 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:32.342 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:32.342 Available Spare: 0% 00:17:32.342 Available Spare Threshold: 0% 00:17:32.342 Life Percentage Used:[2024-10-29 11:06:37.797479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19a8b00) 00:17:32.342 [2024-10-29 11:06:37.797500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.342 [2024-10-29 11:06:37.797530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19efa40, cid 7, qid 0 00:17:32.342 [2024-10-29 11:06:37.797586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.342 [2024-10-29 11:06:37.797594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.342 [2024-10-29 11:06:37.797598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19efa40) on tqpair=0x19a8b00 00:17:32.342 [2024-10-29 11:06:37.797649] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:32.342 [2024-10-29 11:06:37.797663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19eefc0) on tqpair=0x19a8b00 00:17:32.342 [2024-10-29 11:06:37.797671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.342 [2024-10-29 11:06:37.797677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef140) on tqpair=0x19a8b00 00:17:32.342 [2024-10-29 11:06:37.797682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.342 [2024-10-29 11:06:37.797687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef2c0) on tqpair=0x19a8b00 
00:17:32.342 [2024-10-29 11:06:37.797692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.342 [2024-10-29 11:06:37.797698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.342 [2024-10-29 11:06:37.797702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.342 [2024-10-29 11:06:37.797712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.342 [2024-10-29 11:06:37.797729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.342 [2024-10-29 11:06:37.797754] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.342 [2024-10-29 11:06:37.797811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.342 [2024-10-29 11:06:37.797817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.342 [2024-10-29 11:06:37.797821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.342 [2024-10-29 11:06:37.797833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.342 [2024-10-29 11:06:37.797848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.342 [2024-10-29 11:06:37.797875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.342 [2024-10-29 11:06:37.797938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.342 [2024-10-29 11:06:37.797945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.342 [2024-10-29 11:06:37.797949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.342 [2024-10-29 11:06:37.797958] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:32.342 [2024-10-29 11:06:37.797963] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:32.342 [2024-10-29 11:06:37.797972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.342 [2024-10-29 11:06:37.797981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.342 [2024-10-29 11:06:37.797988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 
[2024-10-29 11:06:37.798005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798730] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798857] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798875] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.798907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.798947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.798954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.798958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.798972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.798980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.798987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.799004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.799047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.799053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.799057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 
00:17:32.343 [2024-10-29 11:06:37.799071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.799087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.799104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.799147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.799153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.799157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.799171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.799187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.799203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.799244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.343 [2024-10-29 11:06:37.799250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.343 [2024-10-29 11:06:37.799254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.343 [2024-10-29 11:06:37.799269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.343 [2024-10-29 11:06:37.799277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.343 [2024-10-29 11:06:37.799284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.343 [2024-10-29 11:06:37.799301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.343 [2024-10-29 11:06:37.799344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.799351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.799354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799358] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.799368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:32.344 [2024-10-29 11:06:37.799377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.799384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.799402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.799475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.799484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.799488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.799503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.799519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.799539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.799583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.799590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.799594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.799609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.799625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.799643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.799690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.799697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.799701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.799716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.799732] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.799749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.799807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.799813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.799817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.799831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.799847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.799864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.799908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.799915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.799919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.799933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.799941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.799948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.799965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.800011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.800017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.800021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.800025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.800035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.800039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.800043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.800050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.800067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.344 [2024-10-29 11:06:37.800111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.344 [2024-10-29 11:06:37.800118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.344 [2024-10-29 11:06:37.800121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.800125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.344 [2024-10-29 11:06:37.800136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.800140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.344 [2024-10-29 11:06:37.800144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.344 [2024-10-29 11:06:37.800151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.344 [2024-10-29 11:06:37.800168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0
00:17:32.345 [2024-10-29 11:06:37.801395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.345 [2024-10-29 11:06:37.805438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.345 [2024-10-29 11:06:37.805447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.345 [2024-10-29 11:06:37.805452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.345 [2024-10-29 11:06:37.805467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.345 [2024-10-29 11:06:37.805473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.345 [2024-10-29 11:06:37.805477] nvme_tcp.c:
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a8b00) 00:17:32.345 [2024-10-29 11:06:37.805485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.345 [2024-10-29 11:06:37.805511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19ef440, cid 3, qid 0 00:17:32.345 [2024-10-29 11:06:37.805565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.345 [2024-10-29 11:06:37.805572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.345 [2024-10-29 11:06:37.805575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.345 [2024-10-29 11:06:37.805580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19ef440) on tqpair=0x19a8b00 00:17:32.345 [2024-10-29 11:06:37.805588] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:32.604 0% 00:17:32.604 Data Units Read: 0 00:17:32.604 Data Units Written: 0 00:17:32.604 Host Read Commands: 0 00:17:32.604 Host Write Commands: 0 00:17:32.604 Controller Busy Time: 0 minutes 00:17:32.604 Power Cycles: 0 00:17:32.604 Power On Hours: 0 hours 00:17:32.604 Unsafe Shutdowns: 0 00:17:32.604 Unrecoverable Media Errors: 0 00:17:32.604 Lifetime Error Log Entries: 0 00:17:32.604 Warning Temperature Time: 0 minutes 00:17:32.604 Critical Temperature Time: 0 minutes 00:17:32.604 00:17:32.604 Number of Queues 00:17:32.604 ================ 00:17:32.604 Number of I/O Submission Queues: 127 00:17:32.604 Number of I/O Completion Queues: 127 00:17:32.604 00:17:32.604 Active Namespaces 00:17:32.604 ================= 00:17:32.604 Namespace ID:1 00:17:32.604 Error Recovery Timeout: Unlimited 00:17:32.604 Command Set Identifier: NVM (00h) 00:17:32.604 Deallocate: Supported 00:17:32.604 Deallocated/Unwritten Error: Not Supported 00:17:32.604 Deallocated Read Value: Unknown 00:17:32.604 Deallocate in Write Zeroes: Not Supported 00:17:32.604 Deallocated Guard Field: 0xFFFF 00:17:32.604 Flush: Supported 00:17:32.604 Reservation: Supported 00:17:32.604 Namespace Sharing Capabilities: Multiple Controllers 00:17:32.604 Size (in LBAs): 131072 (0GiB) 00:17:32.604 Capacity (in LBAs): 131072 (0GiB) 00:17:32.604 Utilization (in LBAs): 131072 (0GiB) 00:17:32.604 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:32.604 EUI64: ABCDEF0123456789 00:17:32.604 UUID: 7f561351-2e29-4ff4-9723-a287aa08c611 00:17:32.604 Thin Provisioning: Not Supported 00:17:32.604 Per-NS Atomic Units: Yes 00:17:32.604 Atomic Boundary Size (Normal): 0 00:17:32.604 Atomic Boundary Size (PFail): 0 00:17:32.604 Atomic Boundary Offset: 0 00:17:32.604 Maximum Single Source Range Length: 65535 00:17:32.604 Maximum Copy Length: 65535 00:17:32.604 Maximum Source Range Count: 1 00:17:32.604 NGUID/EUI64 Never Reused: No 00:17:32.604 Namespace Write Protected: No 00:17:32.604 Number of LBA Formats: 1 00:17:32.604 Current LBA Format: LBA Format #00 00:17:32.604 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:32.604 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # 
set +x 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.604 rmmod nvme_tcp 00:17:32.604 rmmod nvme_fabrics 00:17:32.604 rmmod nvme_keyring 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 88860 ']' 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 88860 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 88860 ']' 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 88860 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88860 00:17:32.604 killing process with pid 88860 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88860' 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 88860 00:17:32.604 11:06:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 88860 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # 
nvmf_veth_fini 00:17:32.604 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:32.862 ************************************ 00:17:32.862 END TEST nvmf_identify 00:17:32.862 ************************************ 00:17:32.862 00:17:32.862 real 0m2.013s 00:17:32.862 user 0m4.007s 00:17:32.862 sys 0m0.664s 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:32.862 11:06:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.161 ************************************ 00:17:33.161 START TEST nvmf_perf 00:17:33.161 ************************************ 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:33.161 * Looking for test storage... 
00:17:33.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:33.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.161 --rc genhtml_branch_coverage=1 00:17:33.161 --rc genhtml_function_coverage=1 00:17:33.161 --rc genhtml_legend=1 00:17:33.161 --rc geninfo_all_blocks=1 00:17:33.161 --rc geninfo_unexecuted_blocks=1 00:17:33.161 00:17:33.161 ' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:33.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.161 --rc genhtml_branch_coverage=1 00:17:33.161 --rc genhtml_function_coverage=1 00:17:33.161 --rc genhtml_legend=1 00:17:33.161 --rc geninfo_all_blocks=1 00:17:33.161 --rc geninfo_unexecuted_blocks=1 00:17:33.161 00:17:33.161 ' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:33.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.161 --rc genhtml_branch_coverage=1 00:17:33.161 --rc genhtml_function_coverage=1 00:17:33.161 --rc genhtml_legend=1 00:17:33.161 --rc geninfo_all_blocks=1 00:17:33.161 --rc geninfo_unexecuted_blocks=1 00:17:33.161 00:17:33.161 ' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:33.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.161 --rc genhtml_branch_coverage=1 00:17:33.161 --rc genhtml_function_coverage=1 00:17:33.161 --rc genhtml_legend=1 00:17:33.161 --rc geninfo_all_blocks=1 00:17:33.161 --rc geninfo_unexecuted_blocks=1 00:17:33.161 00:17:33.161 ' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.161 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:33.162 Cannot find device "nvmf_init_br" 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:33.162 Cannot find device "nvmf_init_br2" 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:33.162 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:33.421 Cannot find device "nvmf_tgt_br" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:33.421 Cannot find device "nvmf_tgt_br2" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:33.421 Cannot find device "nvmf_init_br" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:33.421 Cannot find device "nvmf_init_br2" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:33.421 Cannot find device "nvmf_tgt_br" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:33.421 Cannot find device "nvmf_tgt_br2" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:33.421 Cannot find device "nvmf_br" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:33.421 Cannot find device "nvmf_init_if" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:33.421 Cannot find device "nvmf_init_if2" 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:33.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:33.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:33.421 11:06:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:33.421 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:33.681 11:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:33.681 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:33.681 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:33.681 00:17:33.681 --- 10.0.0.3 ping statistics --- 00:17:33.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.681 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:33.681 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:33.681 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:17:33.681 00:17:33.681 --- 10.0.0.4 ping statistics --- 00:17:33.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.681 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:33.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:33.681 00:17:33.681 --- 10.0.0.1 ping statistics --- 00:17:33.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.681 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:33.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:33.681 00:17:33.681 --- 10.0.0.2 ping statistics --- 00:17:33.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.681 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=89111 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 89111 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 89111 ']' 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
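The four addresses probed by the pings above come from the veth/bridge layout that nvmf_veth_init builds in the trace. A rough standalone sketch of just the first initiator/target pair (10.0.0.1 and 10.0.0.3; namespace, interface, and bridge names copied from the trace; assumes root privileges):

    ip netns add nvmf_tgt_ns_spdk                                       # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up             # bridge the two host-side ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.3                                                  # host should now reach the target address

The second pair (nvmf_init_if2/nvmf_tgt_if2 carrying 10.0.0.2 and 10.0.0.4) follows the same pattern.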
00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.681 11:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:33.681 [2024-10-29 11:06:39.111062] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:17:33.681 [2024-10-29 11:06:39.111385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.941 [2024-10-29 11:06:39.255163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.941 [2024-10-29 11:06:39.274717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.941 [2024-10-29 11:06:39.275018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.941 [2024-10-29 11:06:39.275149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.941 [2024-10-29 11:06:39.275268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.941 [2024-10-29 11:06:39.275307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.941 [2024-10-29 11:06:39.278419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.941 [2024-10-29 11:06:39.278565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.941 [2024-10-29 11:06:39.279297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.941 [2024-10-29 11:06:39.279354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.941 [2024-10-29 11:06:39.308517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:34.876 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:35.134 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:35.134 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:35.392 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:35.392 11:06:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:17:35.694 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:35.694 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:35.694 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:35.694 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:35.694 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:35.952 [2024-10-29 11:06:41.429978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.210 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.210 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:36.210 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.469 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:36.469 11:06:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:36.728 11:06:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:36.987 [2024-10-29 11:06:42.467172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:36.987 11:06:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:37.246 11:06:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:37.246 11:06:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:37.246 11:06:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:37.246 11:06:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:38.625 Initializing NVMe Controllers 00:17:38.625 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:38.625 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:38.625 Initialization complete. Launching workers. 
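Between the local PCIe baseline above and the NVMe/TCP runs that follow, the target is assembled through rpc.py. Condensed from the trace, a minimal reproduction of that sequence (same NQN, serial, and bdev names as above; rpc.py path shortened into a variable) would look roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                        # 64 MiB RAM-backed bdev with 512 B blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o                                  # TCP transport, options as in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # first namespace
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1         # second namespace: the local NVMe bdev found earlier
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

Once the listeners are up, the fabric runs below exercise the target with spdk_nvme_perf, e.g. -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'.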
00:17:38.625 ======================================================== 00:17:38.625 Latency(us) 00:17:38.625 Device Information : IOPS MiB/s Average min max 00:17:38.625 PCIE (0000:00:10.0) NSID 1 from core 0: 22345.72 87.29 1432.42 375.75 8593.52 00:17:38.625 ======================================================== 00:17:38.625 Total : 22345.72 87.29 1432.42 375.75 8593.52 00:17:38.625 00:17:38.625 11:06:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:40.002 Initializing NVMe Controllers 00:17:40.002 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:40.002 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:40.002 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:40.002 Initialization complete. Launching workers. 00:17:40.002 ======================================================== 00:17:40.002 Latency(us) 00:17:40.002 Device Information : IOPS MiB/s Average min max 00:17:40.002 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3942.00 15.40 252.34 95.92 5146.89 00:17:40.002 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8119.77 6982.91 12036.74 00:17:40.002 ======================================================== 00:17:40.002 Total : 4066.00 15.88 492.27 95.92 12036.74 00:17:40.002 00:17:40.002 11:06:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:41.379 Initializing NVMe Controllers 00:17:41.379 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.379 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:41.379 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:41.379 Initialization complete. Launching workers. 00:17:41.379 ======================================================== 00:17:41.379 Latency(us) 00:17:41.379 Device Information : IOPS MiB/s Average min max 00:17:41.379 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9119.59 35.62 3509.52 529.26 8916.70 00:17:41.379 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3942.72 15.40 8146.66 6558.58 16968.91 00:17:41.379 ======================================================== 00:17:41.379 Total : 13062.31 51.02 4909.19 529.26 16968.91 00:17:41.379 00:17:41.379 11:06:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:41.379 11:06:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:43.910 Initializing NVMe Controllers 00:17:43.910 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:43.910 Controller IO queue size 128, less than required. 00:17:43.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:43.910 Controller IO queue size 128, less than required. 
00:17:43.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:43.910 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:43.910 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:43.910 Initialization complete. Launching workers. 00:17:43.910 ======================================================== 00:17:43.910 Latency(us) 00:17:43.910 Device Information : IOPS MiB/s Average min max 00:17:43.910 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2000.15 500.04 65203.03 35900.87 104654.88 00:17:43.910 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 679.38 169.85 194133.57 62573.37 319013.24 00:17:43.910 ======================================================== 00:17:43.910 Total : 2679.54 669.88 97892.70 35900.87 319013.24 00:17:43.910 00:17:43.910 11:06:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:44.168 Initializing NVMe Controllers 00:17:44.168 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.168 Controller IO queue size 128, less than required. 00:17:44.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.168 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:44.168 Controller IO queue size 128, less than required. 00:17:44.168 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.168 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:44.168 WARNING: Some requested NVMe devices were skipped 00:17:44.168 No valid NVMe controllers or AIO or URING devices found 00:17:44.168 11:06:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:46.703 Initializing NVMe Controllers 00:17:46.703 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.703 Controller IO queue size 128, less than required. 00:17:46.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.703 Controller IO queue size 128, less than required. 00:17:46.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:46.704 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.704 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:46.704 Initialization complete. Launching workers. 
00:17:46.704 00:17:46.704 ==================== 00:17:46.704 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:46.704 TCP transport: 00:17:46.704 polls: 12509 00:17:46.704 idle_polls: 7513 00:17:46.704 sock_completions: 4996 00:17:46.704 nvme_completions: 6881 00:17:46.704 submitted_requests: 10272 00:17:46.704 queued_requests: 1 00:17:46.704 00:17:46.704 ==================== 00:17:46.704 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:46.704 TCP transport: 00:17:46.704 polls: 12381 00:17:46.704 idle_polls: 7549 00:17:46.704 sock_completions: 4832 00:17:46.704 nvme_completions: 6861 00:17:46.704 submitted_requests: 10220 00:17:46.704 queued_requests: 1 00:17:46.704 ======================================================== 00:17:46.704 Latency(us) 00:17:46.704 Device Information : IOPS MiB/s Average min max 00:17:46.704 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1719.86 429.96 75650.94 38615.10 127079.04 00:17:46.704 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1714.86 428.71 75162.27 24910.77 129962.30 00:17:46.704 ======================================================== 00:17:46.704 Total : 3434.72 858.68 75406.96 24910.77 129962.30 00:17:46.704 00:17:46.704 11:06:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:46.704 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.963 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:17:46.963 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:17:46.963 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=2bc4eda7-a384-4403-a08a-814b7973570a 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 2bc4eda7-a384-4403-a08a-814b7973570a 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=2bc4eda7-a384-4403-a08a-814b7973570a 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:17:47.222 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:17:47.481 { 00:17:47.481 "uuid": "2bc4eda7-a384-4403-a08a-814b7973570a", 00:17:47.481 "name": "lvs_0", 00:17:47.481 "base_bdev": "Nvme0n1", 00:17:47.481 "total_data_clusters": 1278, 00:17:47.481 "free_clusters": 1278, 00:17:47.481 "block_size": 4096, 00:17:47.481 "cluster_size": 4194304 00:17:47.481 } 00:17:47.481 ]' 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="2bc4eda7-a384-4403-a08a-814b7973570a") .free_clusters' 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1278 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | 
select(.uuid=="2bc4eda7-a384-4403-a08a-814b7973570a") .cluster_size' 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:17:47.481 5112 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=5112 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 5112 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:47.481 11:06:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2bc4eda7-a384-4403-a08a-814b7973570a lbd_0 5112 00:17:47.740 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e6e363ab-ae60-4553-ab7c-a3595d604bae 00:17:47.740 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore e6e363ab-ae60-4553-ab7c-a3595d604bae lvs_n_0 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=86091ef7-fab2-42d2-b500-8a24304ace54 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 86091ef7-fab2-42d2-b500-8a24304ace54 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local lvs_uuid=86091ef7-fab2-42d2-b500-8a24304ace54 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local lvs_info 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local fc 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local cs 00:17:48.000 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:48.258 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:17:48.258 { 00:17:48.259 "uuid": "2bc4eda7-a384-4403-a08a-814b7973570a", 00:17:48.259 "name": "lvs_0", 00:17:48.259 "base_bdev": "Nvme0n1", 00:17:48.259 "total_data_clusters": 1278, 00:17:48.259 "free_clusters": 0, 00:17:48.259 "block_size": 4096, 00:17:48.259 "cluster_size": 4194304 00:17:48.259 }, 00:17:48.259 { 00:17:48.259 "uuid": "86091ef7-fab2-42d2-b500-8a24304ace54", 00:17:48.259 "name": "lvs_n_0", 00:17:48.259 "base_bdev": "e6e363ab-ae60-4553-ab7c-a3595d604bae", 00:17:48.259 "total_data_clusters": 1276, 00:17:48.259 "free_clusters": 1276, 00:17:48.259 "block_size": 4096, 00:17:48.259 "cluster_size": 4194304 00:17:48.259 } 00:17:48.259 ]' 00:17:48.259 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="86091ef7-fab2-42d2-b500-8a24304ace54") .free_clusters' 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # fc=1276 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="86091ef7-fab2-42d2-b500-8a24304ace54") .cluster_size' 00:17:48.518 5104 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # cs=4194304 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1375 -- # free_mb=5104 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1376 -- # echo 5104 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:48.518 11:06:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 86091ef7-fab2-42d2-b500-8a24304ace54 lbd_nest_0 5104 00:17:48.776 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a7beda7a-ec60-4a5e-aa6f-7c3b2889acd3 00:17:48.776 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.036 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:49.036 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a7beda7a-ec60-4a5e-aa6f-7c3b2889acd3 00:17:49.295 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:49.580 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:49.580 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:49.580 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:49.580 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:49.580 11:06:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:49.846 Initializing NVMe Controllers 00:17:49.846 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.846 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:49.846 WARNING: Some requested NVMe devices were skipped 00:17:49.846 No valid NVMe controllers or AIO or URING devices found 00:17:49.846 11:06:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:49.846 11:06:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.056 Initializing NVMe Controllers 00:18:02.056 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.056 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.056 Initialization complete. Launching workers. 
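The perf invocations from here on are generated by the two loops over qd_depth and io_size shown above; a condensed sketch of that sweep (binary path and -r target string copied from the trace, loop body paraphrased rather than quoted from perf.sh):

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        # one 10-second 50/50 random read/write pass per combination against the TCP listener
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
      done
    done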
00:18:02.056 ======================================================== 00:18:02.056 Latency(us) 00:18:02.056 Device Information : IOPS MiB/s Average min max 00:18:02.056 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 972.59 121.57 1027.79 323.67 8640.04 00:18:02.056 ======================================================== 00:18:02.056 Total : 972.59 121.57 1027.79 323.67 8640.04 00:18:02.056 00:18:02.056 11:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:02.056 11:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:02.056 11:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.056 Initializing NVMe Controllers 00:18:02.056 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.056 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:02.056 WARNING: Some requested NVMe devices were skipped 00:18:02.056 No valid NVMe controllers or AIO or URING devices found 00:18:02.056 11:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:02.056 11:07:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:12.037 Initializing NVMe Controllers 00:18:12.037 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:12.037 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:12.037 Initialization complete. Launching workers. 
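The MiB/s column in these tables is simply IOPS multiplied by the I/O size; for the 131072-byte, queue-depth-1 result above, a quick back-of-the-envelope check (not something the tool prints) matches the reported figure:

    awk 'BEGIN { printf "%.2f MiB/s\n", 972.59 * 131072 / 1048576 }'   # -> 121.57 MiB/s, as in the table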
00:18:12.037 ======================================================== 00:18:12.037 Latency(us) 00:18:12.037 Device Information : IOPS MiB/s Average min max 00:18:12.037 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1302.33 162.79 24571.82 5307.33 71803.71 00:18:12.037 ======================================================== 00:18:12.037 Total : 1302.33 162.79 24571.82 5307.33 71803.71 00:18:12.037 00:18:12.037 11:07:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:12.037 11:07:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:12.037 11:07:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:12.037 Initializing NVMe Controllers 00:18:12.037 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:12.037 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:12.037 WARNING: Some requested NVMe devices were skipped 00:18:12.037 No valid NVMe controllers or AIO or URING devices found 00:18:12.037 11:07:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:12.037 11:07:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:22.022 Initializing NVMe Controllers 00:18:22.022 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:22.022 Controller IO queue size 128, less than required. 00:18:22.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:22.022 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:22.022 Initialization complete. Launching workers. 
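A useful cross-check on the queue-depth-32, 131072-byte result above is Little's law: outstanding I/Os are roughly IOPS times average latency. The product comes out near 32, i.e. the queue is being kept full; this is an informal sanity check, not output from spdk_nvme_perf:

    awk 'BEGIN { printf "%.1f outstanding I/Os\n", 1302.33 * 24571.82 / 1e6 }'   # ~32.0, with latency in microseconds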
00:18:22.022 ======================================================== 00:18:22.022 Latency(us) 00:18:22.022 Device Information : IOPS MiB/s Average min max 00:18:22.022 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4046.45 505.81 31677.32 7403.80 64355.11 00:18:22.022 ======================================================== 00:18:22.022 Total : 4046.45 505.81 31677.32 7403.80 64355.11 00:18:22.022 00:18:22.022 11:07:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.022 11:07:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a7beda7a-ec60-4a5e-aa6f-7c3b2889acd3 00:18:22.281 11:07:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:22.540 11:07:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e6e363ab-ae60-4553-ab7c-a3595d604bae 00:18:22.799 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.057 rmmod nvme_tcp 00:18:23.057 rmmod nvme_fabrics 00:18:23.057 rmmod nvme_keyring 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 89111 ']' 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 89111 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 89111 ']' 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 89111 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89111 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:23.057 killing process with pid 89111 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89111' 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 89111 00:18:23.057 11:07:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 89111 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:24.433 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:24.691 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:24.691 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.691 11:07:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:24.691 ************************************ 00:18:24.691 END TEST nvmf_perf 00:18:24.691 ************************************ 00:18:24.691 00:18:24.691 real 0m51.661s 00:18:24.691 user 3m14.410s 00:18:24.691 sys 0m12.009s 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.691 ************************************ 00:18:24.691 START TEST nvmf_fio_host 00:18:24.691 ************************************ 00:18:24.691 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:24.951 * Looking for test storage... 00:18:24.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.951 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:24.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.952 --rc genhtml_branch_coverage=1 00:18:24.952 --rc genhtml_function_coverage=1 00:18:24.952 --rc genhtml_legend=1 00:18:24.952 --rc geninfo_all_blocks=1 00:18:24.952 --rc geninfo_unexecuted_blocks=1 00:18:24.952 00:18:24.952 ' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:24.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.952 --rc genhtml_branch_coverage=1 00:18:24.952 --rc genhtml_function_coverage=1 00:18:24.952 --rc genhtml_legend=1 00:18:24.952 --rc geninfo_all_blocks=1 00:18:24.952 --rc geninfo_unexecuted_blocks=1 00:18:24.952 00:18:24.952 ' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:24.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.952 --rc genhtml_branch_coverage=1 00:18:24.952 --rc genhtml_function_coverage=1 00:18:24.952 --rc genhtml_legend=1 00:18:24.952 --rc geninfo_all_blocks=1 00:18:24.952 --rc geninfo_unexecuted_blocks=1 00:18:24.952 00:18:24.952 ' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:24.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.952 --rc genhtml_branch_coverage=1 00:18:24.952 --rc genhtml_function_coverage=1 00:18:24.952 --rc genhtml_legend=1 00:18:24.952 --rc geninfo_all_blocks=1 00:18:24.952 --rc geninfo_unexecuted_blocks=1 00:18:24.952 00:18:24.952 ' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.952 11:07:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.952 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
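The "[: : integer expression expected" message above comes from common.sh line 33 evaluating '[' '' -eq 1 ']': the variable under test is empty, so test cannot compare it numerically. The run continues past it because the comparison simply evaluates false, but a defensive variant would default the value first (illustrative only; the variable name below is not from common.sh):

    flag=""
    # [ "$flag" -eq 1 ] reproduces the "integer expression expected" complaint;
    # supplying a default keeps the comparison numeric:
    if [ "${flag:-0}" -eq 1 ]; then
      echo "feature enabled"
    fi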
00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:24.953 Cannot find device "nvmf_init_br" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:24.953 Cannot find device "nvmf_init_br2" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:24.953 Cannot find device "nvmf_tgt_br" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:24.953 Cannot find device "nvmf_tgt_br2" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:24.953 Cannot find device "nvmf_init_br" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:24.953 Cannot find device "nvmf_init_br2" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:24.953 Cannot find device "nvmf_tgt_br" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:24.953 Cannot find device "nvmf_tgt_br2" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:24.953 Cannot find device "nvmf_br" 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:24.953 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:25.212 Cannot find device "nvmf_init_if" 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:25.212 Cannot find device "nvmf_init_if2" 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:25.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:25.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:25.212 00:18:25.212 --- 10.0.0.3 ping statistics --- 00:18:25.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.212 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:25.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:25.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:18:25.212 00:18:25.212 --- 10.0.0.4 ping statistics --- 00:18:25.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.212 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:25.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:25.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:25.212 00:18:25.212 --- 10.0.0.1 ping statistics --- 00:18:25.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.212 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:25.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:18:25.212 00:18:25.212 --- 10.0.0.2 ping statistics --- 00:18:25.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.212 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.212 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89988 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89988 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 89988 ']' 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:25.472 11:07:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 [2024-10-29 11:07:30.791245] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:18:25.472 [2024-10-29 11:07:30.791327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.472 [2024-10-29 11:07:30.945690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.731 [2024-10-29 11:07:30.970720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.731 [2024-10-29 11:07:30.970794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.731 [2024-10-29 11:07:30.970808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.731 [2024-10-29 11:07:30.970820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.731 [2024-10-29 11:07:30.970834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
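The 10.0.0.1-10.0.0.4 addresses pinged a moment ago, and the 10.0.0.3:4420 listener the target is about to expose, come from the veth topology that nvmf_veth_init built earlier. A condensed retrace of those steps, taken from the trace above (the second initiator/target pair, the link-up commands and error handling are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end, 10.0.0.3
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the target is then launched inside the namespace, so it listens on 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &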
00:18:25.731 [2024-10-29 11:07:30.971795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.731 [2024-10-29 11:07:30.971960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.731 [2024-10-29 11:07:30.975476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.731 [2024-10-29 11:07:30.975499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.731 [2024-10-29 11:07:31.009149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.731 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.731 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:18:25.731 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:25.990 [2024-10-29 11:07:31.348789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.990 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:25.990 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.990 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.990 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:26.249 Malloc1 00:18:26.249 11:07:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:26.815 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:26.815 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:27.384 [2024-10-29 11:07:32.600172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:27.384 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:27.642 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:27.643 11:07:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:27.643 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:27.643 fio-3.35 00:18:27.643 Starting 1 thread 00:18:30.174 00:18:30.174 test: (groupid=0, jobs=1): err= 0: pid=90062: Tue Oct 29 11:07:35 2024 00:18:30.174 read: IOPS=8714, BW=34.0MiB/s (35.7MB/s)(68.4MiB/2008msec) 00:18:30.174 slat (nsec): min=1807, max=358741, avg=2462.06, stdev=3570.17 00:18:30.174 clat (usec): min=2477, max=15795, avg=7653.33, stdev=781.27 00:18:30.174 lat (usec): min=2520, max=15798, avg=7655.80, stdev=781.18 00:18:30.174 clat percentiles (usec): 00:18:30.174 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:18:30.174 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:18:30.174 | 70.00th=[ 7963], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 8979], 00:18:30.174 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[13566], 99.95th=[14222], 00:18:30.174 | 99.99th=[15664] 00:18:30.174 bw ( KiB/s): min=31808, max=37696, per=100.00%, avg=34872.00, stdev=2611.96, samples=4 00:18:30.174 iops : min= 7952, max= 9424, avg=8718.00, stdev=652.99, samples=4 00:18:30.174 write: IOPS=8709, BW=34.0MiB/s (35.7MB/s)(68.3MiB/2008msec); 0 zone resets 00:18:30.174 slat (nsec): min=1923, max=226521, avg=2562.02, stdev=2437.57 00:18:30.174 clat (usec): min=2341, max=15209, avg=6973.75, stdev=722.26 00:18:30.174 lat (usec): min=2354, max=15211, avg=6976.32, stdev=722.25 00:18:30.174 
clat percentiles (usec): 00:18:30.174 | 1.00th=[ 5800], 5.00th=[ 6063], 10.00th=[ 6194], 20.00th=[ 6390], 00:18:30.174 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:18:30.174 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 7898], 95.00th=[ 8160], 00:18:30.174 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[13304], 99.95th=[14353], 00:18:30.174 | 99.99th=[15139] 00:18:30.174 bw ( KiB/s): min=31616, max=36800, per=100.00%, avg=34850.00, stdev=2334.72, samples=4 00:18:30.174 iops : min= 7904, max= 9200, avg=8712.50, stdev=583.68, samples=4 00:18:30.174 lat (msec) : 4=0.08%, 10=99.52%, 20=0.40% 00:18:30.174 cpu : usr=71.20%, sys=22.22%, ctx=14, majf=0, minf=7 00:18:30.174 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:30.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:30.174 issued rwts: total=17499,17489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.174 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:30.174 00:18:30.174 Run status group 0 (all jobs): 00:18:30.174 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2008-2008msec 00:18:30.174 WRITE: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.3MiB (71.6MB), run=2008-2008msec 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.174 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:30.175 11:07:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:30.175 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:30.175 fio-3.35 00:18:30.175 Starting 1 thread 00:18:32.770 00:18:32.770 test: (groupid=0, jobs=1): err= 0: pid=90108: Tue Oct 29 11:07:37 2024 00:18:32.770 read: IOPS=8504, BW=133MiB/s (139MB/s)(267MiB/2010msec) 00:18:32.770 slat (usec): min=2, max=124, avg= 3.60, stdev= 2.37 00:18:32.770 clat (usec): min=2352, max=16928, avg=8264.70, stdev=2538.24 00:18:32.770 lat (usec): min=2356, max=16931, avg=8268.30, stdev=2538.28 00:18:32.770 clat percentiles (usec): 00:18:32.770 | 1.00th=[ 3884], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5997], 00:18:32.770 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8717], 00:18:32.770 | 70.00th=[ 9503], 80.00th=[10421], 90.00th=[11863], 95.00th=[13042], 00:18:32.770 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15926], 99.95th=[16319], 00:18:32.770 | 99.99th=[16581] 00:18:32.770 bw ( KiB/s): min=61472, max=82944, per=52.69%, avg=71704.00, stdev=9879.85, samples=4 00:18:32.770 iops : min= 3842, max= 5184, avg=4481.50, stdev=617.49, samples=4 00:18:32.770 write: IOPS=5108, BW=79.8MiB/s (83.7MB/s)(146MiB/1830msec); 0 zone resets 00:18:32.770 slat (usec): min=32, max=351, avg=36.99, stdev= 9.56 00:18:32.770 clat (usec): min=2939, max=21646, avg=11630.37, stdev=2091.24 00:18:32.770 lat (usec): min=2972, max=21682, avg=11667.36, stdev=2090.95 00:18:32.770 clat percentiles (usec): 00:18:32.770 | 1.00th=[ 7832], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:18:32.770 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:18:32.770 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14484], 95.00th=[15401], 00:18:32.770 | 99.00th=[17957], 99.50th=[19006], 99.90th=[19792], 99.95th=[21103], 00:18:32.770 | 99.99th=[21627] 00:18:32.770 bw ( KiB/s): min=64928, max=85472, per=90.82%, avg=74232.00, stdev=9611.42, samples=4 00:18:32.770 iops : min= 4058, max= 5342, avg=4639.50, stdev=600.71, samples=4 00:18:32.770 lat (msec) : 4=0.84%, 10=55.60%, 20=43.54%, 50=0.03% 00:18:32.770 cpu : usr=83.18%, sys=12.89%, ctx=5, majf=0, minf=3 00:18:32.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:32.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.770 issued rwts: total=17095,9348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.770 00:18:32.770 Run status group 0 (all jobs): 00:18:32.770 
READ: bw=133MiB/s (139MB/s), 133MiB/s-133MiB/s (139MB/s-139MB/s), io=267MiB (280MB), run=2010-2010msec 00:18:32.770 WRITE: bw=79.8MiB/s (83.7MB/s), 79.8MiB/s-79.8MiB/s (83.7MB/s-83.7MB/s), io=146MiB (153MB), run=1830-1830msec 00:18:32.770 11:07:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:32.770 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:33.337 Nvme0n1 00:18:33.337 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:18:33.596 11:07:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:33.854 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:18:33.854 { 00:18:33.854 "uuid": "eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7", 00:18:33.854 "name": "lvs_0", 00:18:33.854 "base_bdev": "Nvme0n1", 00:18:33.854 "total_data_clusters": 4, 00:18:33.855 "free_clusters": 4, 00:18:33.855 "block_size": 4096, 00:18:33.855 "cluster_size": 1073741824 00:18:33.855 } 00:18:33.855 ]' 00:18:33.855 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7") .free_clusters' 00:18:33.855 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=4 00:18:33.855 
11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7") .cluster_size' 00:18:33.855 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=1073741824 00:18:33.855 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=4096 00:18:33.855 4096 00:18:33.855 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 4096 00:18:33.855 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:34.113 b868e8fe-d2db-417c-a0bd-a7de5a573f84 00:18:34.113 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:34.684 11:07:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:34.942 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:35.200 11:07:40 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:35.200 11:07:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:35.200 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:35.200 fio-3.35 00:18:35.200 Starting 1 thread 00:18:37.730 00:18:37.730 test: (groupid=0, jobs=1): err= 0: pid=90218: Tue Oct 29 11:07:42 2024 00:18:37.730 read: IOPS=6242, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2008msec) 00:18:37.730 slat (usec): min=2, max=326, avg= 2.72, stdev= 3.96 00:18:37.730 clat (usec): min=3050, max=19265, avg=10719.56, stdev=894.53 00:18:37.730 lat (usec): min=3060, max=19267, avg=10722.28, stdev=894.24 00:18:37.730 clat percentiles (usec): 00:18:37.730 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:18:37.730 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:18:37.730 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:18:37.730 | 99.00th=[12649], 99.50th=[13173], 99.90th=[18220], 99.95th=[18482], 00:18:37.730 | 99.99th=[18744] 00:18:37.730 bw ( KiB/s): min=24119, max=25400, per=99.78%, avg=24913.75, stdev=556.71, samples=4 00:18:37.730 iops : min= 6029, max= 6350, avg=6228.25, stdev=139.53, samples=4 00:18:37.730 write: IOPS=6235, BW=24.4MiB/s (25.5MB/s)(48.9MiB/2008msec); 0 zone resets 00:18:37.730 slat (usec): min=2, max=258, avg= 2.81, stdev= 2.82 00:18:37.730 clat (usec): min=2497, max=17615, avg=9715.44, stdev=818.37 00:18:37.730 lat (usec): min=2510, max=17618, avg=9718.25, stdev=818.21 00:18:37.730 clat percentiles (usec): 00:18:37.730 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:18:37.730 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:18:37.730 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:18:37.730 | 99.00th=[11469], 99.50th=[11863], 99.90th=[15401], 99.95th=[17433], 00:18:37.730 | 99.99th=[17695] 00:18:37.730 bw ( KiB/s): min=24768, max=25109, per=99.90%, avg=24917.25, stdev=150.71, samples=4 00:18:37.730 iops : min= 6192, max= 6277, avg=6229.25, stdev=37.57, samples=4 00:18:37.730 lat (msec) : 4=0.06%, 10=41.58%, 20=58.36% 00:18:37.730 cpu : usr=72.85%, sys=21.52%, ctx=48, majf=0, minf=7 00:18:37.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:37.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.730 issued rwts: total=12534,12521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.730 00:18:37.730 Run status group 0 (all jobs): 00:18:37.730 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.3MB), run=2008-2008msec 00:18:37.730 
WRITE: bw=24.4MiB/s (25.5MB/s), 24.4MiB/s-24.4MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.3MB), run=2008-2008msec 00:18:37.730 11:07:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:37.730 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:38.297 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9aba833c-b20b-4253-9e5c-01305b389c68 00:18:38.297 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9aba833c-b20b-4253-9e5c-01305b389c68 00:18:38.297 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local lvs_uuid=9aba833c-b20b-4253-9e5c-01305b389c68 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local lvs_info 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local fc 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local cs 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # lvs_info='[ 00:18:38.298 { 00:18:38.298 "uuid": "eb26a70f-12cc-4da3-b1aa-cdc4b8b48be7", 00:18:38.298 "name": "lvs_0", 00:18:38.298 "base_bdev": "Nvme0n1", 00:18:38.298 "total_data_clusters": 4, 00:18:38.298 "free_clusters": 0, 00:18:38.298 "block_size": 4096, 00:18:38.298 "cluster_size": 1073741824 00:18:38.298 }, 00:18:38.298 { 00:18:38.298 "uuid": "9aba833c-b20b-4253-9e5c-01305b389c68", 00:18:38.298 "name": "lvs_n_0", 00:18:38.298 "base_bdev": "b868e8fe-d2db-417c-a0bd-a7de5a573f84", 00:18:38.298 "total_data_clusters": 1022, 00:18:38.298 "free_clusters": 1022, 00:18:38.298 "block_size": 4096, 00:18:38.298 "cluster_size": 4194304 00:18:38.298 } 00:18:38.298 ]' 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # jq '.[] | select(.uuid=="9aba833c-b20b-4253-9e5c-01305b389c68") .free_clusters' 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # fc=1022 00:18:38.298 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # jq '.[] | select(.uuid=="9aba833c-b20b-4253-9e5c-01305b389c68") .cluster_size' 00:18:38.557 4088 00:18:38.557 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # cs=4194304 00:18:38.557 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1375 -- # free_mb=4088 00:18:38.557 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1376 -- # echo 4088 00:18:38.557 11:07:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:38.816 9eea4a7e-1875-4bd8-8e7d-0d901d96cd67 00:18:38.816 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:39.074 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:39.333 11:07:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:39.591 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:39.591 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:39.591 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:39.591 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.591 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:39.592 11:07:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:39.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:39.855 fio-3.35 00:18:39.855 Starting 1 thread 00:18:42.399 00:18:42.399 test: (groupid=0, jobs=1): err= 0: pid=90302: Tue Oct 29 11:07:47 2024 00:18:42.399 read: 
IOPS=5510, BW=21.5MiB/s (22.6MB/s)(43.3MiB/2010msec) 00:18:42.399 slat (nsec): min=1973, max=310467, avg=2700.06, stdev=3932.70 00:18:42.399 clat (usec): min=3225, max=20509, avg=12186.23, stdev=1018.70 00:18:42.399 lat (usec): min=3233, max=20512, avg=12188.93, stdev=1018.36 00:18:42.399 clat percentiles (usec): 00:18:42.399 | 1.00th=[10028], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:18:42.399 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:18:42.399 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:18:42.399 | 99.00th=[14484], 99.50th=[15008], 99.90th=[18744], 99.95th=[19006], 00:18:42.399 | 99.99th=[20579] 00:18:42.399 bw ( KiB/s): min=21048, max=22408, per=99.86%, avg=22010.00, stdev=643.67, samples=4 00:18:42.399 iops : min= 5262, max= 5602, avg=5502.50, stdev=160.92, samples=4 00:18:42.399 write: IOPS=5473, BW=21.4MiB/s (22.4MB/s)(43.0MiB/2010msec); 0 zone resets 00:18:42.399 slat (usec): min=2, max=193, avg= 2.81, stdev= 2.61 00:18:42.399 clat (usec): min=2139, max=18898, avg=11003.22, stdev=950.83 00:18:42.399 lat (usec): min=2151, max=18901, avg=11006.04, stdev=950.69 00:18:42.399 clat percentiles (usec): 00:18:42.399 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:18:42.399 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:18:42.399 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:18:42.399 | 99.00th=[13042], 99.50th=[13435], 99.90th=[17171], 99.95th=[18482], 00:18:42.399 | 99.99th=[18744] 00:18:42.399 bw ( KiB/s): min=21464, max=22376, per=99.99%, avg=21890.00, stdev=397.18, samples=4 00:18:42.399 iops : min= 5366, max= 5594, avg=5472.50, stdev=99.30, samples=4 00:18:42.399 lat (msec) : 4=0.06%, 10=6.25%, 20=93.67%, 50=0.02% 00:18:42.399 cpu : usr=74.66%, sys=20.76%, ctx=4, majf=0, minf=7 00:18:42.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:42.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:42.399 issued rwts: total=11076,11001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.399 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:42.399 00:18:42.399 Run status group 0 (all jobs): 00:18:42.399 READ: bw=21.5MiB/s (22.6MB/s), 21.5MiB/s-21.5MiB/s (22.6MB/s-22.6MB/s), io=43.3MiB (45.4MB), run=2010-2010msec 00:18:42.399 WRITE: bw=21.4MiB/s (22.4MB/s), 21.4MiB/s-21.4MiB/s (22.4MB/s-22.4MB/s), io=43.0MiB (45.1MB), run=2010-2010msec 00:18:42.399 11:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:42.399 11:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:42.399 11:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:42.657 11:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:42.915 11:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:43.173 11:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:43.431 11:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.997 rmmod nvme_tcp 00:18:43.997 rmmod nvme_fabrics 00:18:43.997 rmmod nvme_keyring 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 89988 ']' 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 89988 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 89988 ']' 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 89988 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89988 00:18:43.997 killing process with pid 89988 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89988' 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 89988 00:18:43.997 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 89988 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.255 
11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.255 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:44.515 ************************************ 00:18:44.515 END TEST nvmf_fio_host 00:18:44.515 ************************************ 00:18:44.515 00:18:44.515 real 0m19.771s 00:18:44.515 user 1m26.767s 00:18:44.515 sys 0m4.408s 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.515 ************************************ 00:18:44.515 START TEST nvmf_failover 00:18:44.515 ************************************ 00:18:44.515 11:07:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:44.777 * Looking for test storage... 
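For orientation, the nvmf_fio_host pass that just finished drives the target entirely through rpc.py and then hands I/O to fio via the SPDK NVMe plugin. A condensed sketch of that sequence, reconstructed from the commands visible in the trace above (the repo path, the 10.0.0.3:4420 listener and the cnode1 NQN are the values used in this particular run, not fixed defaults):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side: TCP transport, a small malloc bdev as the namespace, and a listener.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Host side: fio with the SPDK NVMe ioengine loaded through LD_PRELOAD.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The later passes above repeat the same pattern against cnode2 and cnode3, only with the malloc bdev replaced by a logical volume (lvs_0/lbd_0) and a nested logical volume (lvs_n_0/lbd_nest_0) carved out of the attached NVMe device.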
00:18:44.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.777 --rc genhtml_branch_coverage=1 00:18:44.777 --rc genhtml_function_coverage=1 00:18:44.777 --rc genhtml_legend=1 00:18:44.777 --rc geninfo_all_blocks=1 00:18:44.777 --rc geninfo_unexecuted_blocks=1 00:18:44.777 00:18:44.777 ' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.777 --rc genhtml_branch_coverage=1 00:18:44.777 --rc genhtml_function_coverage=1 00:18:44.777 --rc genhtml_legend=1 00:18:44.777 --rc geninfo_all_blocks=1 00:18:44.777 --rc geninfo_unexecuted_blocks=1 00:18:44.777 00:18:44.777 ' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.777 --rc genhtml_branch_coverage=1 00:18:44.777 --rc genhtml_function_coverage=1 00:18:44.777 --rc genhtml_legend=1 00:18:44.777 --rc geninfo_all_blocks=1 00:18:44.777 --rc geninfo_unexecuted_blocks=1 00:18:44.777 00:18:44.777 ' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:44.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.777 --rc genhtml_branch_coverage=1 00:18:44.777 --rc genhtml_function_coverage=1 00:18:44.777 --rc genhtml_legend=1 00:18:44.777 --rc geninfo_all_blocks=1 00:18:44.777 --rc geninfo_unexecuted_blocks=1 00:18:44.777 00:18:44.777 ' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.777 
11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:44.777 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.778 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
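Because NET_TYPE=virt, the prepare_net_devs call that nvmftestinit makes next falls through to nvmf_veth_init, which builds the whole test network out of veth pairs instead of real NICs. A stripped-down sketch of that topology, using the same interface names and addresses that appear in the trace that follows (the second initiator/target pair, error handling and the SPDK_NVMF iptables comments are omitted):

  # The target lives in its own network namespace; initiators stay in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1 for the initiator, 10.0.0.3 for the target.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bring everything up and bridge the host-side peers together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Let NVMe/TCP traffic (port 4420) in and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings against 10.0.0.1 through 10.0.0.4 further down are simply a sanity check that this plumbing is in place before the target application is started.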
00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:44.778 Cannot find device "nvmf_init_br" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:44.778 Cannot find device "nvmf_init_br2" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:44.778 Cannot find device "nvmf_tgt_br" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.778 Cannot find device "nvmf_tgt_br2" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:44.778 Cannot find device "nvmf_init_br" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:44.778 Cannot find device "nvmf_init_br2" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:44.778 Cannot find device "nvmf_tgt_br" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.778 Cannot find device "nvmf_tgt_br2" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.778 Cannot find device "nvmf_br" 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:44.778 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:45.038 Cannot find device "nvmf_init_if" 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:45.038 Cannot find device "nvmf_init_if2" 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.038 
11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.038 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:45.298 00:18:45.298 --- 10.0.0.3 ping statistics --- 00:18:45.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.298 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.298 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.298 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:18:45.298 00:18:45.298 --- 10.0.0.4 ping statistics --- 00:18:45.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.298 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:45.298 00:18:45.298 --- 10.0.0.1 ping statistics --- 00:18:45.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.298 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:18:45.298 00:18:45.298 --- 10.0.0.2 ping statistics --- 00:18:45.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.298 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:45.298 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=90592 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 90592 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 90592 ']' 00:18:45.299 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.299 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:45.299 [2024-10-29 11:07:50.653205] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:18:45.299 [2024-10-29 11:07:50.653289] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.558 [2024-10-29 11:07:50.802300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.558 [2024-10-29 11:07:50.826331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.558 [2024-10-29 11:07:50.826423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.558 [2024-10-29 11:07:50.826440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.558 [2024-10-29 11:07:50.826451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.558 [2024-10-29 11:07:50.826460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
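The interface setup traced above reduces to a small iproute2/iptables recipe: one veth pair per path, the target-side ends moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, and TCP port 4420 opened on the initiator interfaces. A condensed sketch of the first path follows; the second pair (nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2 and 10.0.0.4) is built the same way, and all names, addresses and ports are taken from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator-side address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target-side address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.3            # initiator-to-namespace reachability check

The ping statistics recorded above confirm both directions (host to 10.0.0.3/10.0.0.4 and the namespace back to 10.0.0.1/10.0.0.2) before the target application is launched inside the namespace.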
00:18:45.558 [2024-10-29 11:07:50.827368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.558 [2024-10-29 11:07:50.828112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.558 [2024-10-29 11:07:50.828153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.558 [2024-10-29 11:07:50.865284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.558 11:07:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.817 [2024-10-29 11:07:51.185120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.817 11:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:46.076 Malloc0 00:18:46.076 11:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:46.335 11:07:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:46.594 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:46.853 [2024-10-29 11:07:52.252630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:46.853 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:47.112 [2024-10-29 11:07:52.496816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:47.112 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:47.384 [2024-10-29 11:07:52.741000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90642 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
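Stripped of the xtrace prefixes, the target-side provisioning above is a short rpc.py sequence; a condensed sketch, where rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py and every argument appears verbatim in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as traced above
    rpc.py bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM disk, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                     # three listeners for the failover dance
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done

bdevperf is then started against /var/tmp/bdevperf.sock with a 128-deep, 4 KiB verify workload (-q 128 -o 4096 -w verify -t 15 -f), which is the I/O that the listener add/remove steps below run against.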
00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90642 /var/tmp/bdevperf.sock 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 90642 ']' 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.384 11:07:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:47.642 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:47.642 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:18:47.642 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:47.900 NVMe0n1 00:18:47.900 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:48.467 00:18:48.467 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90658 00:18:48.467 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:48.467 11:07:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.404 11:07:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:49.663 11:07:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:52.947 11:07:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:52.947 00:18:52.947 11:07:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:53.204 11:07:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:56.508 11:08:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:56.508 [2024-10-29 11:08:01.964283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:56.508 11:08:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:57.894 11:08:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:57.894 11:08:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90658 00:19:04.468 { 00:19:04.468 "results": [ 00:19:04.468 { 00:19:04.468 "job": "NVMe0n1", 00:19:04.468 "core_mask": "0x1", 00:19:04.468 "workload": "verify", 00:19:04.468 "status": "finished", 00:19:04.468 "verify_range": { 00:19:04.468 "start": 0, 00:19:04.468 "length": 16384 00:19:04.468 }, 00:19:04.468 "queue_depth": 128, 00:19:04.468 "io_size": 4096, 00:19:04.468 "runtime": 15.00831, 00:19:04.468 "iops": 9544.245821148417, 00:19:04.468 "mibps": 37.282210238861005, 00:19:04.468 "io_failed": 3453, 00:19:04.468 "io_timeout": 0, 00:19:04.468 "avg_latency_us": 13065.01781693248, 00:19:04.468 "min_latency_us": 558.5454545454545, 00:19:04.468 "max_latency_us": 18826.705454545456 00:19:04.468 } 00:19:04.468 ], 00:19:04.468 "core_count": 1 00:19:04.468 } 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90642 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 90642 ']' 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 90642 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90642 00:19:04.468 killing process with pid 90642 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90642' 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 90642 00:19:04.468 11:08:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 90642 00:19:04.468 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:04.468 [2024-10-29 11:07:52.812729] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:19:04.468 [2024-10-29 11:07:52.812878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90642 ] 00:19:04.468 [2024-10-29 11:07:52.956633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.468 [2024-10-29 11:07:52.978664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.468 [2024-10-29 11:07:53.008683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.468 Running I/O for 15 seconds... 
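On the host side, the trace above attaches NVMe0 over the 4420 and 4421 paths with the explicit failover multipath policy, kicks off the timed verify run, and then removes the active 4420 listener so the initiator is forced onto the surviving path; the test then attaches a third path on 4422, removes 4421, re-adds the 4420 listener and finally removes 4422, all while I/O keeps running. A condensed sketch of the first failover, with rpc.py and bdevperf.py abbreviating the repo paths used in the log:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &              # 15 s verify workload
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The per-second IOPS samples and the aborted-I/O records that follow are the replayed try.txt: queued commands on the dropped 10.0.0.3:4420 connection are aborted with SQ DELETION, bdev_nvme then reports the failover to 10.0.0.3:4421 and a successful controller reset, which is why the run still finishes at roughly 9.5k IOPS despite the 3453 failed I/Os shown in the results above.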
00:19:04.468 6804.00 IOPS, 26.58 MiB/s [2024-10-29T11:08:09.965Z] [2024-10-29 11:07:55.019620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.019971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.019984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:04.468 [2024-10-29 11:07:55.019998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020317] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020679] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.020984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.020997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62232 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.021025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.021052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.021080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.021108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.021136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.468 [2024-10-29 11:07:55.021163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.468 [2024-10-29 11:07:55.021178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 
[2024-10-29 11:07:55.021313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.021985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.021998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 
11:07:55.022551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.022957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.022973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.022987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.469 [2024-10-29 11:07:55.023217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.469 [2024-10-29 11:07:55.023552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.469 [2024-10-29 11:07:55.023567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:55.023581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:55.023618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:55.023647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:55.023676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae8700 is same with the state(6) to be set 00:19:04.470 [2024-10-29 11:07:55.023706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.470 [2024-10-29 11:07:55.023717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.470 [2024-10-29 11:07:55.023728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61904 len:8 PRP1 0x0 PRP2 0x0 00:19:04.470 [2024-10-29 11:07:55.023744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023799] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:04.470 [2024-10-29 11:07:55.023858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.470 [2024-10-29 11:07:55.023881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.470 [2024-10-29 11:07:55.023910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.470 [2024-10-29 11:07:55.023943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.470 [2024-10-29 11:07:55.023971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:55.023985] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:04.470 [2024-10-29 11:07:55.027724] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:04.470 [2024-10-29 11:07:55.027761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac6a80 (9): Bad file descriptor 00:19:04.470 [2024-10-29 11:07:55.067964] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:19:04.470 7934.00 IOPS, 30.99 MiB/s [2024-10-29T11:08:09.967Z] 8660.00 IOPS, 33.83 MiB/s [2024-10-29T11:08:09.967Z] 9033.00 IOPS, 35.29 MiB/s [2024-10-29T11:08:09.967Z] [2024-10-29 11:07:58.669472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.669785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.669975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.669989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
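Every aborted I/O in this stretch of the log follows the same two-record pattern: nvme_io_qpair_print_command prints the in-flight command (opcode, sqid, cid, nsid, lba, len and the SGL type), and spdk_nvme_print_completion prints the matching completion status, here always "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x00 / status code 0x08 with the phase (p), more (m) and do-not-retry (dnr) bits clear. A small stand-alone sketch for tallying these records from a console log follows; it is a hypothetical helper, not part of the SPDK tree, and the regexes assume only the line format visible above.

#!/usr/bin/env python3
# Hypothetical helper, not an SPDK tool: summarize the aborted I/O prints
# seen in a console log like the one above.
import re
import sys
from collections import Counter

# nvme_io_qpair_print_command records, e.g.
#   [...] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61784 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<opc>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)
# spdk_nvme_print_completion records, e.g.
#   [...] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>[^(]+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) qid:(?P<qid>\d+) cid:(?P<cid>\d+)"
)

def summarize(lines):
    opcodes, statuses, lbas = Counter(), Counter(), []
    for line in lines:
        # Wrapped console lines can hold many records, so scan with finditer.
        for m in CMD_RE.finditer(line):
            opcodes[m.group("opc")] += 1
            lbas.append(int(m.group("lba")))
        for m in CPL_RE.finditer(line):
            statuses["%s (%s/%s)" % (m.group("status"), m.group("sct"), m.group("sc"))] += 1
    return opcodes, statuses, lbas

if __name__ == "__main__":
    opcodes, statuses, lbas = summarize(sys.stdin)
    print("commands   :", dict(opcodes))
    print("completions:", dict(statuses))
    if lbas:
        print("lba range  : %d .. %d" % (min(lbas), max(lbas)))

Feeding the console log on stdin (python3 summarize_aborts.py < console.log, name hypothetical) gives a quick count of aborted READs versus WRITEs, the completion statuses seen, and the LBA span they covered, which is usually all that matters when skimming one of these failover passes.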
00:19:04.470 [2024-10-29 11:07:58.670031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670313] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670641] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.670751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98560 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.670976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.470 [2024-10-29 11:07:58.670990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:04.470 [2024-10-29 11:07:58.671250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.470 [2024-10-29 11:07:58.671320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.470 [2024-10-29 11:07:58.671333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.671361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.671399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.671447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.671475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671579] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.671969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.671983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.671996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.471 [2024-10-29 11:07:58.672486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 
11:07:58.672888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.471 [2024-10-29 11:07:58.672993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc26ea0 is same with the state(6) to be set 00:19:04.471 [2024-10-29 11:07:58.673024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98824 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99280 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:19:04.471 [2024-10-29 11:07:58.673196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99296 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673478] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99368 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99376 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:19:04.471 [2024-10-29 11:07:58.673747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.471 [2024-10-29 11:07:58.673760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.471 [2024-10-29 11:07:58.673770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.471 [2024-10-29 11:07:58.673780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:19:04.472 [2024-10-29 11:07:58.673793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:07:58.673806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.472 [2024-10-29 11:07:58.673816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.472 [2024-10-29 11:07:58.673826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:19:04.472 [2024-10-29 11:07:58.673839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:07:58.673887] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:04.472 [2024-10-29 11:07:58.673950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.472 [2024-10-29 11:07:58.673973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:07:58.673988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.472 [2024-10-29 11:07:58.674002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:07:58.674015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.472 [2024-10-29 11:07:58.674031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:07:58.674046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.472 [2024-10-29 11:07:58.674059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:07:58.674073] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:04.472 [2024-10-29 11:07:58.674124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac6a80 (9): Bad file descriptor 00:19:04.472 [2024-10-29 11:07:58.677867] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:04.472 [2024-10-29 11:07:58.712405] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
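That closes the second failover pass, and it has the same shape as the first: bdev_nvme_failover_trid announces the path switch (10.0.0.3:4420 to 4421 earlier, 4421 to 4422 here), the "(9): Bad file descriptor" flush errors are plain errno 9 (EBADF) on the TCP qpair being dropped, nvme_ctrlr_disconnect starts the reconnect, and _bdev_nvme_reset_ctrlr_complete reports "Resetting controller successful." The sketch below, a hypothetical post-processing script rather than an SPDK utility, pulls those notices out of the log and measures each switch using only the message text shown above.

#!/usr/bin/env python3
# Hypothetical post-processing sketch, not an SPDK utility: reconstruct the
# failover timeline from the bdev_nvme notices in a console log like the one
# above and report how long each path switch took.
import re
import sys
from datetime import datetime

TS = r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6})\] "
START_RE = re.compile(TS + r"bdev_nvme\.c:\s*\d+:bdev_nvme_failover_trid: \*NOTICE\*: "
                           r"\[[^\]]+\] Start failover from (\S+) to (\S+)")
DONE_RE = re.compile(TS + r"bdev_nvme\.c:\s*\d+:_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: "
                          r"\[[^\]]+\] Resetting controller successful")

def parse_ts(text):
    return datetime.strptime(text, "%Y-%m-%d %H:%M:%S.%f")

def events(lines):
    # Yield (timestamp, kind, src, dst) for every failover-start / reset-done notice.
    for line in lines:
        for m in START_RE.finditer(line):
            yield parse_ts(m.group(1)), "start", m.group(2), m.group(3)
        for m in DONE_RE.finditer(line):
            yield parse_ts(m.group(1)), "done", None, None

if __name__ == "__main__":
    pending = None
    for ts, kind, src, dst in sorted(events(sys.stdin), key=lambda e: e[0]):
        if kind == "start":
            pending = (ts, src, dst)
        elif pending is not None:
            start, src, dst = pending
            ms = (ts - start).total_seconds() * 1000.0
            print(f"failover {src} -> {dst}: controller reset completed {ms:.1f} ms after failover start")
            pending = None

On the timestamps visible here each switch completes in a few tens of milliseconds; for the 4421 to 4422 pass, 11:07:58.673887 (failover start) to 11:07:58.712405 (reset successful) is roughly 38.5 ms.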
00:19:04.472 9130.00 IOPS, 35.66 MiB/s [2024-10-29T11:08:09.969Z] 9281.67 IOPS, 36.26 MiB/s [2024-10-29T11:08:09.969Z] 9382.00 IOPS, 36.65 MiB/s [2024-10-29T11:08:09.969Z] 9413.50 IOPS, 36.77 MiB/s [2024-10-29T11:08:09.969Z] 9462.67 IOPS, 36.96 MiB/s [2024-10-29T11:08:09.969Z] [2024-10-29 11:08:03.259888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.260375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.260530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.260641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.260730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.260822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.260898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.260977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.261043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.261119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.261184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.261247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.261310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.261440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.261531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.261633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.261720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.261898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74056 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.261987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.262051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.262124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.262204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.262276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.262349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.262464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.262536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.262644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.262717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.262808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.262889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.262964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.263104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.263248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.263392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:04.472 [2024-10-29 11:08:03.263583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.263748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.263890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.263970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.264045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.264112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.264177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.264255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.264333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.264464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.264577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.264653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.264723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.264807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.264888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.264980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.265128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265207] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.265274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.265415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.265596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.265752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.265893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.265968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.266118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.266276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.266632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.266791] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.266933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.266998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.267078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.267145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.267220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.267295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.267370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.267481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.267581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.267651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.267731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.267813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.267891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.267966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.268045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.268112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.268185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.268250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.268324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.268409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.268530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.268617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.472 [2024-10-29 11:08:03.268694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.268785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.268849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.268925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.269066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.269220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.269350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.269540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.269694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.269854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.269930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 
[2024-10-29 11:08:03.270004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.270029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.270046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.270060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.270076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.270089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.270104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.270118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.270133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.472 [2024-10-29 11:08:03.270147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.472 [2024-10-29 11:08:03.270162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270650] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.270694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74896 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.270978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.270993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:04.473 [2024-10-29 11:08:03.271235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.271264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:04.473 [2024-10-29 11:08:03.271294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.271323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.271353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.271382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.271426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:04.473 [2024-10-29 11:08:03.271471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaea820 is same with the state(6) to be set 00:19:04.473 [2024-10-29 11:08:03.271505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74424 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74968 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271631] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74976 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74984 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74992 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75000 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75008 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75016 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:75024 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.271961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.271972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75032 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.271985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.271999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75040 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75048 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75056 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74432 len:8 PRP1 0x0 PRP2 
0x0 00:19:04.473 [2024-10-29 11:08:03.272216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74440 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74448 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74456 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74464 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74472 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74480 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:04.473 [2024-10-29 11:08:03.272616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:04.473 [2024-10-29 11:08:03.272628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74488 len:8 PRP1 0x0 PRP2 0x0 00:19:04.473 [2024-10-29 11:08:03.272642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.473 [2024-10-29 11:08:03.272702] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:04.473 [2024-10-29 11:08:03.272783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.473 [2024-10-29 11:08:03.272808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.474 [2024-10-29 11:08:03.272838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.474 [2024-10-29 11:08:03.272853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.474 [2024-10-29 11:08:03.272868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.474 [2024-10-29 11:08:03.272893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.474 [2024-10-29 11:08:03.272909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:04.474 [2024-10-29 11:08:03.272923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:04.474 [2024-10-29 11:08:03.272938] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:04.474 [2024-10-29 11:08:03.272976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xac6a80 (9): Bad file descriptor 00:19:04.474 [2024-10-29 11:08:03.276764] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:04.474 [2024-10-29 11:08:03.310255] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
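The cycle above completes the last failover of this phase (10.0.0.3:4422 back to 10.0.0.3:4420), and the trace that follows verifies the run by counting the 'Resetting controller successful' notices and comparing the count against the value 3 that this run expects. A small sketch of that check is given below; it assumes the captured log is the same try.txt file this run later cats and removes, and the expected count of 3 is specific to this run, so both are assumptions to adjust for a different configuration.

    # Sketch of the pass/fail check applied to the captured bdevperf log.
    TRY=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$TRY")
    # This run expects three successful controller resets (count=3 in the trace below).
    if (( count != 3 )); then
        echo "unexpected number of successful failovers: $count" >&2
        exit 1
    fi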
00:19:04.474 9446.10 IOPS, 36.90 MiB/s [2024-10-29T11:08:09.971Z] 9468.45 IOPS, 36.99 MiB/s [2024-10-29T11:08:09.971Z] 9486.75 IOPS, 37.06 MiB/s [2024-10-29T11:08:09.971Z] 9505.31 IOPS, 37.13 MiB/s [2024-10-29T11:08:09.971Z] 9523.50 IOPS, 37.20 MiB/s [2024-10-29T11:08:09.971Z] 9543.53 IOPS, 37.28 MiB/s 00:19:04.474 Latency(us) 00:19:04.474 [2024-10-29T11:08:09.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.474 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:04.474 Verification LBA range: start 0x0 length 0x4000 00:19:04.474 NVMe0n1 : 15.01 9544.25 37.28 230.07 0.00 13065.02 558.55 18826.71 00:19:04.474 [2024-10-29T11:08:09.971Z] =================================================================================================================== 00:19:04.474 [2024-10-29T11:08:09.971Z] Total : 9544.25 37.28 230.07 0.00 13065.02 558.55 18826.71 00:19:04.474 Received shutdown signal, test time was about 15.000000 seconds 00:19:04.474 00:19:04.474 Latency(us) 00:19:04.474 [2024-10-29T11:08:09.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.474 [2024-10-29T11:08:09.971Z] =================================================================================================================== 00:19:04.474 [2024-10-29T11:08:09.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:04.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90831 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90831 /var/tmp/bdevperf.sock 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 90831 ']' 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:04.474 [2024-10-29 11:08:09.664736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:04.474 [2024-10-29 11:08:09.901025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:04.474 11:08:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:05.040 NVMe0n1 00:19:05.040 11:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:05.299 00:19:05.299 11:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:05.557 00:19:05.557 11:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:05.557 11:08:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:05.816 11:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:06.074 11:08:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:09.359 11:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:09.360 11:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:09.360 11:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:09.360 11:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90900 00:19:09.360 11:08:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90900 00:19:10.737 { 00:19:10.737 "results": [ 00:19:10.737 { 00:19:10.737 "job": "NVMe0n1", 00:19:10.737 "core_mask": "0x1", 00:19:10.737 "workload": "verify", 00:19:10.737 "status": "finished", 00:19:10.737 "verify_range": { 00:19:10.737 "start": 0, 00:19:10.737 "length": 16384 00:19:10.737 }, 00:19:10.737 "queue_depth": 128, 
00:19:10.737 "io_size": 4096, 00:19:10.737 "runtime": 1.005374, 00:19:10.737 "iops": 7420.124252268311, 00:19:10.737 "mibps": 28.984860360423088, 00:19:10.737 "io_failed": 0, 00:19:10.737 "io_timeout": 0, 00:19:10.737 "avg_latency_us": 17179.75431830368, 00:19:10.737 "min_latency_us": 781.9636363636364, 00:19:10.737 "max_latency_us": 16681.890909090907 00:19:10.737 } 00:19:10.737 ], 00:19:10.737 "core_count": 1 00:19:10.737 } 00:19:10.737 11:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:10.737 [2024-10-29 11:08:09.138548] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:19:10.737 [2024-10-29 11:08:09.139162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90831 ] 00:19:10.737 [2024-10-29 11:08:09.281884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.737 [2024-10-29 11:08:09.301935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.737 [2024-10-29 11:08:09.331029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.737 [2024-10-29 11:08:11.426629] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:10.737 [2024-10-29 11:08:11.427269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.737 [2024-10-29 11:08:11.427380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.737 [2024-10-29 11:08:11.427524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.737 [2024-10-29 11:08:11.427610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.737 [2024-10-29 11:08:11.427684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.737 [2024-10-29 11:08:11.427766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.737 [2024-10-29 11:08:11.427836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.737 [2024-10-29 11:08:11.427914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.737 [2024-10-29 11:08:11.427984] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:10.737 [2024-10-29 11:08:11.428107] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:10.737 [2024-10-29 11:08:11.428212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a4a80 (9): Bad file descriptor 00:19:10.737 [2024-10-29 11:08:11.434315] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:19:10.737 Running I/O for 1 seconds... 00:19:10.737 7324.00 IOPS, 28.61 MiB/s 00:19:10.737 Latency(us) 00:19:10.737 [2024-10-29T11:08:16.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.737 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:10.737 Verification LBA range: start 0x0 length 0x4000 00:19:10.737 NVMe0n1 : 1.01 7420.12 28.98 0.00 0.00 17179.75 781.96 16681.89 00:19:10.737 [2024-10-29T11:08:16.234Z] =================================================================================================================== 00:19:10.737 [2024-10-29T11:08:16.234Z] Total : 7420.12 28.98 0.00 0.00 17179.75 781.96 16681.89 00:19:10.737 11:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:10.737 11:08:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:10.737 11:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.996 11:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:10.996 11:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:11.254 11:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:11.513 11:08:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:14.799 11:08:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:14.799 11:08:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90831 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 90831 ']' 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 90831 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90831 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:14.799 killing process with pid 90831 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90831' 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 90831 00:19:14.799 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 90831 00:19:15.057 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:15.057 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.316 rmmod nvme_tcp 00:19:15.316 rmmod nvme_fabrics 00:19:15.316 rmmod nvme_keyring 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:15.316 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 90592 ']' 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 90592 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 90592 ']' 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 90592 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90592 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:15.317 killing process with pid 90592 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90592' 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 90592 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 90592 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.317 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.576 11:08:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:15.576 00:19:15.576 real 0m31.109s 00:19:15.576 user 2m0.060s 00:19:15.576 sys 0m5.349s 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:15.576 ************************************ 00:19:15.576 END TEST nvmf_failover 00:19:15.576 11:08:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:15.576 ************************************ 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.836 ************************************ 00:19:15.836 START TEST nvmf_host_discovery 00:19:15.836 ************************************ 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:15.836 * Looking for test storage... 
00:19:15.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:15.836 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:15.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.837 --rc genhtml_branch_coverage=1 00:19:15.837 --rc genhtml_function_coverage=1 00:19:15.837 --rc genhtml_legend=1 00:19:15.837 --rc geninfo_all_blocks=1 00:19:15.837 --rc geninfo_unexecuted_blocks=1 00:19:15.837 00:19:15.837 ' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:15.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.837 --rc genhtml_branch_coverage=1 00:19:15.837 --rc genhtml_function_coverage=1 00:19:15.837 --rc genhtml_legend=1 00:19:15.837 --rc geninfo_all_blocks=1 00:19:15.837 --rc geninfo_unexecuted_blocks=1 00:19:15.837 00:19:15.837 ' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:15.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.837 --rc genhtml_branch_coverage=1 00:19:15.837 --rc genhtml_function_coverage=1 00:19:15.837 --rc genhtml_legend=1 00:19:15.837 --rc geninfo_all_blocks=1 00:19:15.837 --rc geninfo_unexecuted_blocks=1 00:19:15.837 00:19:15.837 ' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:15.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.837 --rc genhtml_branch_coverage=1 00:19:15.837 --rc genhtml_function_coverage=1 00:19:15.837 --rc genhtml_legend=1 00:19:15.837 --rc geninfo_all_blocks=1 00:19:15.837 --rc geninfo_unexecuted_blocks=1 00:19:15.837 00:19:15.837 ' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.837 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.837 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.838 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:15.838 Cannot find device "nvmf_init_br" 00:19:15.838 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:15.838 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:15.838 Cannot find device "nvmf_init_br2" 00:19:15.838 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:15.838 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:16.097 Cannot find device "nvmf_tgt_br" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.097 Cannot find device "nvmf_tgt_br2" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:16.097 Cannot find device "nvmf_init_br" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:16.097 Cannot find device "nvmf_init_br2" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:16.097 Cannot find device "nvmf_tgt_br" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:16.097 Cannot find device "nvmf_tgt_br2" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:16.097 Cannot find device "nvmf_br" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:16.097 Cannot find device "nvmf_init_if" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:16.097 Cannot find device "nvmf_init_if2" 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:16.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:16.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:16.097 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:16.098 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:16.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:16.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:19:16.357 00:19:16.357 --- 10.0.0.3 ping statistics --- 00:19:16.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.357 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:16.357 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:16.357 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:19:16.357 00:19:16.357 --- 10.0.0.4 ping statistics --- 00:19:16.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.357 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:16.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:19:16.357 00:19:16.357 --- 10.0.0.1 ping statistics --- 00:19:16.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.357 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:16.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:16.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:16.357 00:19:16.357 --- 10.0.0.2 ping statistics --- 00:19:16.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.357 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:16.357 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=91224 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 91224 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 91224 ']' 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:16.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:16.358 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.358 [2024-10-29 11:08:21.740632] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:19:16.358 [2024-10-29 11:08:21.740720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.617 [2024-10-29 11:08:21.889422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.617 [2024-10-29 11:08:21.907301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.617 [2024-10-29 11:08:21.907404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.617 [2024-10-29 11:08:21.907431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.617 [2024-10-29 11:08:21.907438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.617 [2024-10-29 11:08:21.907444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.617 [2024-10-29 11:08:21.907756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.617 [2024-10-29 11:08:21.934547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.617 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:16.617 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:19:16.617 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:16.617 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:16.617 11:08:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.617 [2024-10-29 11:08:22.033117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.617 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.617 [2024-10-29 11:08:22.041230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.618 11:08:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.618 null0 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.618 null1 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91244 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91244 /tmp/host.sock 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 91244 ']' 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:16.618 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:16.618 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:16.877 [2024-10-29 11:08:22.135640] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:19:16.877 [2024-10-29 11:08:22.135748] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91244 ] 00:19:16.877 [2024-10-29 11:08:22.289312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.877 [2024-10-29 11:08:22.312852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.877 [2024-10-29 11:08:22.345153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:17.135 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.136 11:08:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.136 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 [2024-10-29 11:08:22.765346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:19:17.395 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:17.396 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.396 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.396 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:17.656 11:08:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.656 11:08:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:19:17.656 11:08:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:19:17.915 [2024-10-29 11:08:23.405826] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:17.915 [2024-10-29 11:08:23.405851] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:17.915 [2024-10-29 11:08:23.405882] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:17.916 [2024-10-29 11:08:23.411861] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:18.174 [2024-10-29 11:08:23.466175] 
bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:18.174 [2024-10-29 11:08:23.467003] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xea3480:1 started. 00:19:18.174 [2024-10-29 11:08:23.468626] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:18.174 [2024-10-29 11:08:23.468651] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:18.174 [2024-10-29 11:08:23.474333] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xea3480 was disconnected and freed. delete nvme_qpair. 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:18.744 11:08:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:18.744 [2024-10-29 11:08:24.237666] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe71140:1 started. 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:18.744 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.004 [2024-10-29 11:08:24.245034] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe71140 was disconnected and freed. delete nvme_qpair. 
00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.004 [2024-10-29 11:08:24.354691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:19.004 [2024-10-29 11:08:24.354986] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:19.004 [2024-10-29 11:08:24.355008] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:19.004 [2024-10-29 11:08:24.360970] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:19:19.004 [2024-10-29 11:08:24.419339] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:19.004 [2024-10-29 11:08:24.419376] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:19.004 [2024-10-29 11:08:24.419401] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:19.004 [2024-10-29 11:08:24.419423] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.004 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths 
nvme0 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:19.005 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.264 [2024-10-29 11:08:24.583205] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:19.264 [2024-10-29 11:08:24.583235] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:19.264 [2024-10-29 11:08:24.586283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.264 [2024-10-29 11:08:24.586329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.264 [2024-10-29 11:08:24.586357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.264 [2024-10-29 11:08:24.586365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.264 [2024-10-29 11:08:24.586374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.264 [2024-10-29 11:08:24.586397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.264 [2024-10-29 11:08:24.586431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.264 [2024-10-29 11:08:24.586441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.264 [2024-10-29 11:08:24.586449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe736e0 is same with the state(6) to be set 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.264 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:19.265 [2024-10-29 11:08:24.589219] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:19.265 [2024-10-29 11:08:24.589240] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:19.265 [2024-10-29 11:08:24.589291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe736e0 (9): Bad file descriptor 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.265 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # (( max-- )) 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 11:08:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 [2024-10-29 11:08:25.986535] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:20.904 [2024-10-29 11:08:25.986559] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:20.904 [2024-10-29 11:08:25.986591] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:20.904 [2024-10-29 11:08:25.992568] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:20.904 [2024-10-29 11:08:26.050865] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:19:20.904 [2024-10-29 11:08:26.051514] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xe6de60:1 started. 00:19:20.904 [2024-10-29 11:08:26.053322] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:20.904 [2024-10-29 11:08:26.053375] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.904 [2024-10-29 11:08:26.055528] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xe6de60 was disconnected and freed. delete nvme_qpair. 
00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 request: 00:19:20.904 { 00:19:20.904 "name": "nvme", 00:19:20.904 "trtype": "tcp", 00:19:20.904 "traddr": "10.0.0.3", 00:19:20.904 "adrfam": "ipv4", 00:19:20.904 "trsvcid": "8009", 00:19:20.904 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:20.904 "wait_for_attach": true, 00:19:20.904 "method": "bdev_nvme_start_discovery", 00:19:20.904 "req_id": 1 00:19:20.904 } 00:19:20.904 Got JSON-RPC error response 00:19:20.904 response: 00:19:20.904 { 00:19:20.904 "code": -17, 00:19:20.904 "message": "File exists" 00:19:20.904 } 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 request: 00:19:20.904 { 00:19:20.904 "name": "nvme_second", 00:19:20.904 "trtype": "tcp", 00:19:20.904 "traddr": "10.0.0.3", 00:19:20.904 "adrfam": "ipv4", 00:19:20.904 "trsvcid": "8009", 00:19:20.904 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:20.904 "wait_for_attach": true, 00:19:20.904 "method": "bdev_nvme_start_discovery", 00:19:20.904 "req_id": 1 00:19:20.904 } 00:19:20.904 Got JSON-RPC error response 00:19:20.904 response: 00:19:20.904 { 00:19:20.904 "code": -17, 00:19:20.904 "message": "File exists" 00:19:20.904 } 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:20.904 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.905 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:20.905 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.905 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:20.905 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.905 11:08:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:21.842 [2024-10-29 11:08:27.321953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:21.842 [2024-10-29 11:08:27.322025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea2ea0 with addr=10.0.0.3, port=8010 00:19:21.842 [2024-10-29 11:08:27.322042] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:21.842 [2024-10-29 
11:08:27.322051] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:21.842 [2024-10-29 11:08:27.322059] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:23.220 [2024-10-29 11:08:28.321952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.221 [2024-10-29 11:08:28.322023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea2ea0 with addr=10.0.0.3, port=8010 00:19:23.221 [2024-10-29 11:08:28.322040] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:23.221 [2024-10-29 11:08:28.322048] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:23.221 [2024-10-29 11:08:28.322055] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:24.157 [2024-10-29 11:08:29.321868] bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:24.157 request: 00:19:24.157 { 00:19:24.157 "name": "nvme_second", 00:19:24.157 "trtype": "tcp", 00:19:24.157 "traddr": "10.0.0.3", 00:19:24.157 "adrfam": "ipv4", 00:19:24.157 "trsvcid": "8010", 00:19:24.157 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:24.157 "wait_for_attach": false, 00:19:24.157 "attach_timeout_ms": 3000, 00:19:24.157 "method": "bdev_nvme_start_discovery", 00:19:24.157 "req_id": 1 00:19:24.157 } 00:19:24.157 Got JSON-RPC error response 00:19:24.157 response: 00:19:24.157 { 00:19:24.157 "code": -110, 00:19:24.157 "message": "Connection timed out" 00:19:24.157 } 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91244 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:24.157 11:08:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.157 rmmod nvme_tcp 00:19:24.157 rmmod nvme_fabrics 00:19:24.157 rmmod nvme_keyring 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 91224 ']' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 91224 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 91224 ']' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 91224 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91224 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:24.157 killing process with pid 91224 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91224' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 91224 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 91224 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:24.157 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:24.415 11:08:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:24.415 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:24.416 00:19:24.416 real 0m8.791s 00:19:24.416 user 0m16.881s 00:19:24.416 sys 0m1.813s 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:24.416 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:24.416 ************************************ 00:19:24.416 END TEST nvmf_host_discovery 00:19:24.416 ************************************ 00:19:24.674 11:08:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:24.674 11:08:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:24.674 11:08:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:24.674 11:08:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:24.674 ************************************ 00:19:24.674 START TEST nvmf_host_multipath_status 00:19:24.674 ************************************ 00:19:24.674 11:08:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:24.674 * Looking for test storage... 
00:19:24.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:24.674 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:24.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.675 --rc genhtml_branch_coverage=1 00:19:24.675 --rc genhtml_function_coverage=1 00:19:24.675 --rc genhtml_legend=1 00:19:24.675 --rc geninfo_all_blocks=1 00:19:24.675 --rc geninfo_unexecuted_blocks=1 00:19:24.675 00:19:24.675 ' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:24.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.675 --rc genhtml_branch_coverage=1 00:19:24.675 --rc genhtml_function_coverage=1 00:19:24.675 --rc genhtml_legend=1 00:19:24.675 --rc geninfo_all_blocks=1 00:19:24.675 --rc geninfo_unexecuted_blocks=1 00:19:24.675 00:19:24.675 ' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:24.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.675 --rc genhtml_branch_coverage=1 00:19:24.675 --rc genhtml_function_coverage=1 00:19:24.675 --rc genhtml_legend=1 00:19:24.675 --rc geninfo_all_blocks=1 00:19:24.675 --rc geninfo_unexecuted_blocks=1 00:19:24.675 00:19:24.675 ' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:24.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.675 --rc genhtml_branch_coverage=1 00:19:24.675 --rc genhtml_function_coverage=1 00:19:24.675 --rc genhtml_legend=1 00:19:24.675 --rc geninfo_all_blocks=1 00:19:24.675 --rc geninfo_unexecuted_blocks=1 00:19:24.675 00:19:24.675 ' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:24.675 11:08:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:24.675 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:24.676 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:24.676 Cannot find device "nvmf_init_br" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:24.933 Cannot find device "nvmf_init_br2" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:24.933 Cannot find device "nvmf_tgt_br" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:24.933 Cannot find device "nvmf_tgt_br2" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:24.933 Cannot find device "nvmf_init_br" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:24.933 Cannot find device "nvmf_init_br2" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:24.933 Cannot find device "nvmf_tgt_br" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:24.933 Cannot find device "nvmf_tgt_br2" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:24.933 Cannot find device "nvmf_br" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:24.933 Cannot find device "nvmf_init_if" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:24.933 Cannot find device "nvmf_init_if2" 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:24.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:24.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:24.933 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:25.192 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:25.192 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:25.192 00:19:25.192 --- 10.0.0.3 ping statistics --- 00:19:25.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.192 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:25.192 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:25.192 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:25.192 00:19:25.192 --- 10.0.0.4 ping statistics --- 00:19:25.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.192 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:25.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:25.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:25.192 00:19:25.192 --- 10.0.0.1 ping statistics --- 00:19:25.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.192 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:25.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:25.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:25.192 00:19:25.192 --- 10.0.0.2 ping statistics --- 00:19:25.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:25.192 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:25.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=91743 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 91743 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 91743 ']' 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.192 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:25.193 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:25.193 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
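The ip/iptables commands traced above build the virtual test bed the multipath test runs against: a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, veth pairs joined over the nvmf_br bridge, the 10.0.0.0/24 addresses used throughout the rest of the log, ACCEPT rules for the NVMe/TCP ports, and single-packet pings to verify reachability before nvme-tcp is loaded and nvmf_tgt is started inside the namespace with -m 0x3. A condensed sketch of that topology, assuming root privileges and keeping only one initiator/target interface pair (the real nvmf_veth_init in test/nvmf/common.sh also creates nvmf_init_if2/nvmf_tgt_if2 and handles teardown and retries):

# Sketch only: commands taken from the nvmf/common.sh trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                                # bridge the two veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                             # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> host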
00:19:25.193 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:25.193 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:25.193 [2024-10-29 11:08:30.656414] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:19:25.193 [2024-10-29 11:08:30.656514] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.452 [2024-10-29 11:08:30.812719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:25.452 [2024-10-29 11:08:30.837139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.452 [2024-10-29 11:08:30.837455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.452 [2024-10-29 11:08:30.837635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.452 [2024-10-29 11:08:30.837775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.452 [2024-10-29 11:08:30.837825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.452 [2024-10-29 11:08:30.838822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.452 [2024-10-29 11:08:30.838836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.452 [2024-10-29 11:08:30.874047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.452 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:25.452 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:19:25.452 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.452 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.452 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:25.715 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.715 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91743 00:19:25.715 11:08:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:25.979 [2024-10-29 11:08:31.268862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.979 11:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:26.237 Malloc0 00:19:26.237 11:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:26.496 11:08:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:26.755 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:27.014 [2024-10-29 11:08:32.330939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.014 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:27.273 [2024-10-29 11:08:32.562994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:27.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.273 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91791 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91791 /var/tmp/bdevperf.sock 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 91791 ']' 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
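Taken together, the rpc.py calls traced above are the entire target-side bring-up for this test: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 created with ANA reporting enabled, the namespace attached, and two listeners on 10.0.0.3 (ports 4420 and 4421) so the initiator sees two paths to the same namespace. A minimal sketch of that sequence, assuming nvmf_tgt is already up and answering on /var/tmp/spdk.sock as in the log; transport options are copied verbatim from multipath_status.sh@36:

# Sketch only: RPC sequence reconstructed from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
# -a: allow any host, -s: serial, -r: ANA reporting, -m 2: max namespaces
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same address give the initiator two distinct paths
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

The bdevperf process launched just above then attaches Nvme0 over both listeners (the bdev_nvme_attach_controller calls with -s 4420 and -s 4421 and -x multipath further down), after which the test repeatedly flips ANA states with nvmf_subsystem_listener_set_ana_state and reads the resulting path view back through bdev_nvme_get_io_paths filtered with jq.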
00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:27.274 11:08:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:28.209 11:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.209 11:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:19:28.209 11:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:28.467 11:08:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:28.727 Nvme0n1 00:19:28.727 11:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:28.986 Nvme0n1 00:19:29.245 11:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:29.245 11:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.150 11:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:31.151 11:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:31.410 11:08:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:31.670 11:08:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:32.607 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:32.608 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:32.608 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.608 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:32.867 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.867 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:32.867 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.867 11:08:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:33.127 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:33.127 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:33.127 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.127 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:33.386 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.386 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:33.386 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.386 11:08:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:33.645 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.645 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:33.646 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:33.646 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.905 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.905 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:33.905 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.905 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:34.171 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.171 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:34.171 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:34.429 11:08:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:34.688 11:08:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:35.629 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:35.629 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:35.629 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.629 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:35.889 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:35.889 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:35.889 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.889 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:36.147 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.147 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:36.147 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.147 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:36.406 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.406 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:36.406 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.406 11:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:36.976 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:37.235 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.235 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:37.235 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:37.494 11:08:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:37.753 11:08:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:38.691 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:38.691 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:38.691 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.691 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:38.951 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.951 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:38.951 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:38.951 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.260 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.260 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:39.260 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.260 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:39.527 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.527 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:39.527 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:39.527 11:08:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.787 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:39.787 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:39.787 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:39.787 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:40.046 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.046 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:40.046 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:40.046 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:40.306 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:40.306 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:40.306 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:40.566 11:08:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:40.825 11:08:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:41.762 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:41.763 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:41.763 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:41.763 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:42.331 11:08:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.331 11:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:42.590 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:42.590 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:42.590 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:42.590 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:43.159 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:43.418 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:43.418 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:43.418 11:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:43.676 11:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:43.935 11:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:44.870 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:44.870 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:44.870 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:44.870 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.438 11:08:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:45.697 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:45.697 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:45.697 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:45.697 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:45.956 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:45.956 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:45.956 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:45.956 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:19:46.214 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:46.214 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:46.214 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:46.214 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:46.473 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:46.473 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:46.473 11:08:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:46.732 11:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:46.991 11:08:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.369 11:08:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:48.628 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.628 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:48.628 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.628 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:48.887 11:08:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.887 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:48.887 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.887 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:49.146 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.146 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:49.146 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.146 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:49.405 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:49.405 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:49.405 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.405 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:49.665 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.665 11:08:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:49.925 11:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:49.925 11:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:50.184 11:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:50.443 11:08:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:51.438 11:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:51.438 11:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:51.438 11:08:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.438 11:08:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:51.701 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.701 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:51.701 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.701 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:51.960 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.960 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:51.960 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.960 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.218 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.218 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.218 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.218 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.475 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.475 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.475 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.475 11:08:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.733 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.733 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:52.733 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:52.733 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.990 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.990 11:08:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:52.990 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:53.247 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:53.541 11:08:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:54.473 11:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:54.473 11:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:54.473 11:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.473 11:08:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:54.731 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:54.731 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:54.731 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.731 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:54.990 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.990 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:54.990 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.990 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.249 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.249 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:55.249 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.249 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:55.508 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.508 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:55.508 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.508 11:09:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:55.768 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.768 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:55.768 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.768 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.027 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.027 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:56.027 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:56.287 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:56.547 11:09:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:57.485 11:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:57.485 11:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:57.744 11:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:57.744 11:09:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.004 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.004 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:58.004 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.004 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.262 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.262 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:58.262 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.262 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.521 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.521 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.521 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:58.521 11:09:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.780 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.780 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:58.780 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.780 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.039 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.039 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:59.039 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.039 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.299 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.299 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:59.299 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:59.558 11:09:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:59.815 11:09:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:00.750 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:00.750 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:00.750 11:09:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.750 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:01.009 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.009 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:01.009 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.009 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:01.268 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.268 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:01.268 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.268 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.527 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.527 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:01.527 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:01.528 11:09:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.788 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.788 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:01.788 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:01.788 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.356 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.356 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:02.356 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.356 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91791 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 91791 ']' 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 91791 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91791 00:20:02.621 killing process with pid 91791 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91791' 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 91791 00:20:02.621 11:09:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 91791 00:20:02.621 { 00:20:02.621 "results": [ 00:20:02.621 { 00:20:02.621 "job": "Nvme0n1", 00:20:02.621 "core_mask": "0x4", 00:20:02.621 "workload": "verify", 00:20:02.621 "status": "terminated", 00:20:02.621 "verify_range": { 00:20:02.621 "start": 0, 00:20:02.621 "length": 16384 00:20:02.621 }, 00:20:02.621 "queue_depth": 128, 00:20:02.621 "io_size": 4096, 00:20:02.621 "runtime": 33.331377, 00:20:02.621 "iops": 9360.399361838547, 00:20:02.621 "mibps": 36.56406000718182, 00:20:02.621 "io_failed": 0, 00:20:02.621 "io_timeout": 0, 00:20:02.621 "avg_latency_us": 13647.112837437662, 00:20:02.621 "min_latency_us": 871.3309090909091, 00:20:02.621 "max_latency_us": 4026531.84 00:20:02.621 } 00:20:02.621 ], 00:20:02.621 "core_count": 1 00:20:02.621 } 00:20:02.621 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91791 00:20:02.621 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:02.621 [2024-10-29 11:08:32.637015] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:20:02.621 [2024-10-29 11:08:32.637121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91791 ] 00:20:02.621 [2024-10-29 11:08:32.786514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.621 [2024-10-29 11:08:32.805341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:02.621 [2024-10-29 11:08:32.834630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:02.621 Running I/O for 90 seconds... 
00:20:02.621 7957.00 IOPS, 31.08 MiB/s [2024-10-29T11:09:08.118Z] 8010.50 IOPS, 31.29 MiB/s [2024-10-29T11:09:08.118Z] 7985.67 IOPS, 31.19 MiB/s [2024-10-29T11:09:08.118Z] 7973.25 IOPS, 31.15 MiB/s [2024-10-29T11:09:08.118Z] 7940.20 IOPS, 31.02 MiB/s [2024-10-29T11:09:08.118Z] 7945.83 IOPS, 31.04 MiB/s [2024-10-29T11:09:08.118Z] 8017.00 IOPS, 31.32 MiB/s [2024-10-29T11:09:08.118Z] 8317.75 IOPS, 32.49 MiB/s [2024-10-29T11:09:08.118Z] 8571.11 IOPS, 33.48 MiB/s [2024-10-29T11:09:08.118Z] 8795.60 IOPS, 34.36 MiB/s [2024-10-29T11:09:08.118Z] 8958.45 IOPS, 34.99 MiB/s [2024-10-29T11:09:08.118Z] 9110.92 IOPS, 35.59 MiB/s [2024-10-29T11:09:08.118Z] 9251.31 IOPS, 36.14 MiB/s [2024-10-29T11:09:08.118Z] 9356.79 IOPS, 36.55 MiB/s [2024-10-29T11:09:08.118Z] [2024-10-29 11:08:49.096682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.096744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.096828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.096848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.096884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.096898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.096918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.096933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.096952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.096966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.096985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.096999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.097033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.621 [2024-10-29 11:08:49.097066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.621 [2024-10-29 11:08:49.097364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:02.621 [2024-10-29 11:08:49.097384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:02.622 [2024-10-29 11:08:49.097849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.097969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.097990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.622 [2024-10-29 11:08:49.098625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:02.622 [2024-10-29 11:08:49.098936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.622 [2024-10-29 11:08:49.098950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:20:02.622 [2024-10-29 11:08:49.098969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.098983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.099514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.099980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.099999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.100014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:02.623 [2024-10-29 11:08:49.100050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.100085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.100119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.100154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.100188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.623 [2024-10-29 11:08:49.100223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.100257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.100292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.100335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.100383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.100422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:02.623 [2024-10-29 11:08:49.100442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.623 [2024-10-29 11:08:49.100457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.100792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.100807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.101523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:20:02.624 [2024-10-29 11:08:49.101941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.101984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.101999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.102041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:08:49.102097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:08:49.102428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:08:49.102443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:02.624 9079.13 IOPS, 35.47 MiB/s [2024-10-29T11:09:08.121Z] 8511.69 IOPS, 33.25 MiB/s [2024-10-29T11:09:08.121Z] 8011.00 IOPS, 31.29 MiB/s [2024-10-29T11:09:08.121Z] 7565.94 IOPS, 29.55 MiB/s [2024-10-29T11:09:08.121Z] 7448.53 IOPS, 29.10 MiB/s [2024-10-29T11:09:08.121Z] 7603.50 IOPS, 29.70 MiB/s [2024-10-29T11:09:08.121Z] 7754.95 IOPS, 30.29 MiB/s [2024-10-29T11:09:08.121Z] 8053.77 IOPS, 31.46 MiB/s [2024-10-29T11:09:08.121Z] 8306.00 IOPS, 32.45 MiB/s [2024-10-29T11:09:08.121Z] 8533.38 IOPS, 33.33 MiB/s [2024-10-29T11:09:08.121Z] 8624.24 IOPS, 33.69 MiB/s [2024-10-29T11:09:08.121Z] 8689.77 IOPS, 33.94 MiB/s [2024-10-29T11:09:08.121Z] 8739.78 IOPS, 34.14 MiB/s [2024-10-29T11:09:08.121Z] 8840.79 IOPS, 34.53 MiB/s [2024-10-29T11:09:08.121Z] 9005.97 IOPS, 35.18 MiB/s [2024-10-29T11:09:08.121Z] 9164.57 IOPS, 35.80 MiB/s [2024-10-29T11:09:08.121Z] [2024-10-29 11:09:05.139056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:09:05.139115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:09:05.139221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:09:05.139260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:09:05.139293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:09:05.139327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 
lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.624 [2024-10-29 11:09:05.139361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:09:05.139407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:02.624 [2024-10-29 11:09:05.139428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.624 [2024-10-29 11:09:05.139442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.139542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.139640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.139686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.139719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.139753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.139973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.139987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
00:20:02.625 [2024-10-29 11:09:05.140039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.625 [2024-10-29 11:09:05.140641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:02.625 [2024-10-29 11:09:05.140662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.625 [2024-10-29 11:09:05.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.140712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.140746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.140782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.140845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.140878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.140912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.140945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.140964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.140979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.141015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.141043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.141064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.141079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.141099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.141113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.141133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.141147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:02.626 [2024-10-29 11:09:05.142674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.142716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.142750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.142784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.142817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.142851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.142885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.142919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.142952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.142985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.143001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.143035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.143068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:02.626 [2024-10-29 11:09:05.143101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.143135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.143169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.143202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:02.626 [2024-10-29 11:09:05.143222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:02.626 [2024-10-29 11:09:05.143236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:02.626 9271.87 IOPS, 36.22 MiB/s [2024-10-29T11:09:08.123Z] 9319.38 IOPS, 36.40 MiB/s [2024-10-29T11:09:08.123Z] 9353.82 IOPS, 36.54 MiB/s [2024-10-29T11:09:08.123Z] Received shutdown signal, test time was about 33.332171 seconds 00:20:02.626 00:20:02.626 Latency(us) 00:20:02.626 [2024-10-29T11:09:08.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.626 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:02.626 Verification LBA range: start 0x0 length 0x4000 00:20:02.626 Nvme0n1 : 33.33 9360.40 36.56 0.00 0.00 13647.11 871.33 4026531.84 00:20:02.626 [2024-10-29T11:09:08.123Z] =================================================================================================================== 00:20:02.626 [2024-10-29T11:09:08.123Z] Total : 9360.40 36.56 0.00 0.00 13647.11 871.33 4026531.84 00:20:02.626 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.887 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.887 rmmod nvme_tcp 00:20:02.887 rmmod nvme_fabrics 00:20:02.887 rmmod nvme_keyring 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 91743 ']' 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 91743 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 91743 ']' 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 91743 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91743 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:03.146 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:03.146 killing process with pid 91743 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91743' 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 91743 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 91743 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:03.147 11:09:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:03.147 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:03.411 00:20:03.411 real 0m38.854s 00:20:03.411 user 2m5.680s 00:20:03.411 sys 0m11.294s 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.411 ************************************ 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:03.411 END TEST nvmf_host_multipath_status 00:20:03.411 ************************************ 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:03.411 
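The xtrace entries above record the multipath_status teardown (host/multipath_status.sh plus the nvmftestfini helpers from nvmf/common.sh). As a rough, hand-runnable sketch of that same sequence — the subsystem NQN, PID 91743, and the nvmf_* link/bridge/netns names are the values from this particular run, not fixed constants — the cleanup amounts to:

  # Sketch only: values below (NQN, PID, link and netns names) are taken from this run.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  modprobe -v -r nvme-tcp        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its verbose output
  modprobe -v -r nvme-fabrics
  kill 91743                     # stop the nvmf target (reactor_0) started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$link" nomaster; ip link set "$link" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if; ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # assumed: what remove_spdk_ns ultimately does

The real nvmftestfini/remove_spdk_ns helpers in nvmf/common.sh wrap these steps in retries (the "for i in {1..20}" loop visible above) and error handling that this sketch omits.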
11:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:03.411 11:09:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.411 ************************************ 00:20:03.412 START TEST nvmf_discovery_remove_ifc 00:20:03.412 ************************************ 00:20:03.412 11:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:03.672 * Looking for test storage... 00:20:03.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.672 11:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:03.672 11:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:20:03.672 11:09:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.672 --rc genhtml_branch_coverage=1 00:20:03.672 --rc genhtml_function_coverage=1 00:20:03.672 --rc genhtml_legend=1 00:20:03.672 --rc geninfo_all_blocks=1 00:20:03.672 --rc geninfo_unexecuted_blocks=1 00:20:03.672 00:20:03.672 ' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.672 --rc genhtml_branch_coverage=1 00:20:03.672 --rc genhtml_function_coverage=1 00:20:03.672 --rc genhtml_legend=1 00:20:03.672 --rc geninfo_all_blocks=1 00:20:03.672 --rc geninfo_unexecuted_blocks=1 00:20:03.672 00:20:03.672 ' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.672 --rc genhtml_branch_coverage=1 00:20:03.672 --rc genhtml_function_coverage=1 00:20:03.672 --rc genhtml_legend=1 00:20:03.672 --rc geninfo_all_blocks=1 00:20:03.672 --rc geninfo_unexecuted_blocks=1 00:20:03.672 00:20:03.672 ' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:03.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.672 --rc genhtml_branch_coverage=1 00:20:03.672 --rc genhtml_function_coverage=1 00:20:03.672 --rc genhtml_legend=1 00:20:03.672 --rc geninfo_all_blocks=1 00:20:03.672 --rc geninfo_unexecuted_blocks=1 00:20:03.672 00:20:03.672 ' 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.672 11:09:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.672 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.673 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:03.673 11:09:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:03.673 Cannot find device "nvmf_init_br" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:03.673 Cannot find device "nvmf_init_br2" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:03.673 Cannot find device "nvmf_tgt_br" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.673 Cannot find device "nvmf_tgt_br2" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:03.673 Cannot find device "nvmf_init_br" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:03.673 Cannot find device "nvmf_init_br2" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:03.673 Cannot find device "nvmf_tgt_br" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:03.673 Cannot find device "nvmf_tgt_br2" 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:03.673 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:03.931 Cannot find device "nvmf_br" 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:03.931 Cannot find device "nvmf_init_if" 00:20:03.931 11:09:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:03.931 Cannot find device "nvmf_init_if2" 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.931 11:09:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:03.931 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.931 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:20:03.931 00:20:03.931 --- 10.0.0.3 ping statistics --- 00:20:03.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.931 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:03.931 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:03.931 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:20:03.931 00:20:03.931 --- 10.0.0.4 ping statistics --- 00:20:03.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.931 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:03.931 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:20:03.931 00:20:03.931 --- 10.0.0.1 ping statistics --- 00:20:03.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.931 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:20:04.190 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:20:04.191 00:20:04.191 --- 10.0.0.2 ping statistics --- 00:20:04.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.191 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=92625 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 92625 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 92625 ']' 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.191 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.191 [2024-10-29 11:09:09.531759] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:20:04.191 [2024-10-29 11:09:09.532404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.191 [2024-10-29 11:09:09.681086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.449 [2024-10-29 11:09:09.700082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.449 [2024-10-29 11:09:09.700135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.449 [2024-10-29 11:09:09.700159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.449 [2024-10-29 11:09:09.700166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.449 [2024-10-29 11:09:09.700172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.449 [2024-10-29 11:09:09.700559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.449 [2024-10-29 11:09:09.727164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.449 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.450 [2024-10-29 11:09:09.834649] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.450 [2024-10-29 11:09:09.842726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:04.450 null0 00:20:04.450 [2024-10-29 11:09:09.874654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92644 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92644 /tmp/host.sock 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 92644 ']' 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.450 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.450 11:09:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:04.707 [2024-10-29 11:09:09.958973] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:20:04.707 [2024-10-29 11:09:09.959082] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92644 ] 00:20:04.707 [2024-10-29 11:09:10.112675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.707 [2024-10-29 11:09:10.136671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:05.644 [2024-10-29 11:09:10.895036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.644 11:09:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.581 [2024-10-29 11:09:11.927487] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:06.581 [2024-10-29 11:09:11.927513] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:06.581 [2024-10-29 11:09:11.927528] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:06.581 [2024-10-29 11:09:11.933524] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:06.581 [2024-10-29 11:09:11.987906] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:06.581 [2024-10-29 11:09:11.988885] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b49fc0:1 started. 00:20:06.581 [2024-10-29 11:09:11.990346] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:06.581 [2024-10-29 11:09:11.990420] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:06.581 [2024-10-29 11:09:11.990445] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:06.581 [2024-10-29 11:09:11.990460] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:06.581 [2024-10-29 11:09:11.990479] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.581 [2024-10-29 11:09:11.996431] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b49fc0 was disconnected and freed. delete nvme_qpair. 
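For orientation, the logged commands up to this point amount to a compact setup recipe: build a veth/bridge topology with the target side isolated in the nvmf_tgt_ns_spdk namespace, start the target and a separate host application, then ask the host to run discovery against 10.0.0.3:8009. The sketch below is a reconstruction of that flow, not the harness script itself: it is trimmed to one initiator/one target interface, uses scripts/rpc.py as a stand-in for the harness's rpc_cmd wrapper, and omits the target-side subsystem/listener RPCs that produce the "Listening on 10.0.0.3 port 8009/4420" notices. Paths, addresses, ports and flags are copied from the log.

  #!/usr/bin/env bash
  # Minimal sketch (assumptions noted above), reconstructed from the logged commands.
  NETNS=nvmf_tgt_ns_spdk
  SPDK=/home/vagrant/spdk_repo/spdk

  # veth pairs: the *_if ends carry addresses, the *_br peers hang off the bridge;
  # the target-side interface is moved into the namespace.
  ip netns add "$NETNS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NETNS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec "$NETNS" ip link set nvmf_tgt_if up

  # accept NVMe/TCP traffic on the initiator interface and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # target inside the namespace, host application (also nvmf_tgt) outside it
  ip netns exec "$NETNS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  sleep 2   # crude wait; the harness uses waitforlisten on each RPC socket

  # host side: apply the same bdev_nvme option the test sets, finish init,
  # then start discovery with the same reconnect/loss timeouts as the log
  "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_set_options -e 1
  "$SPDK/scripts/rpc.py" -s /tmp/host.sock framework_start_init
  "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach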
00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.581 11:09:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.581 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.581 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:06.581 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:06.581 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:06.581 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:06.581 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:06.582 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.582 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.582 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:06.582 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:06.582 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:06.582 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:06.840 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.840 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:06.840 11:09:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:07.777 11:09:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:08.715 11:09:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:08.715 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.973 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:08.973 11:09:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:09.909 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.910 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:09.910 11:09:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:10.847 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.106 11:09:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:11.106 11:09:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.042 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.043 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.043 [2024-10-29 11:09:17.418286] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:12.043 [2024-10-29 11:09:17.418355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.043 [2024-10-29 11:09:17.418369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.043 [2024-10-29 11:09:17.418380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.043 [2024-10-29 11:09:17.418414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.043 [2024-10-29 11:09:17.418424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.043 [2024-10-29 11:09:17.418433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.043 [2024-10-29 11:09:17.418443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.043 [2024-10-29 11:09:17.418451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.043 [2024-10-29 11:09:17.418460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:12.043 [2024-10-29 11:09:17.418468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.043 [2024-10-29 11:09:17.418477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25980 is same with the state(6) to be set 00:20:12.043 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:12.043 11:09:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:12.043 [2024-10-29 11:09:17.428281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25980 (9): Bad file descriptor 
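The repeated bdev_get_bdevs / jq / sort / xargs / sleep 1 groups above are the harness polling for the backing bdev by name. A rough equivalent of those get_bdev_list / wait_for_bdev helpers is sketched below; scripts/rpc.py again stands in for rpc_cmd, and the 30-iteration bound is an assumption added for illustration rather than something the log shows.

  # Illustrative stand-ins for the get_bdev_list / wait_for_bdev pattern in the log.
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

  get_bdev_list() {
      # list bdev names, sorted and joined into one line, as the logged pipeline does
      $rpc_py bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll once per second until the bdev list matches the expected string
      local expected=$1
      for _ in $(seq 1 30); do
          [[ "$(get_bdev_list)" == "$expected" ]] && return 0
          sleep 1
      done
      return 1
  }

  wait_for_bdev nvme0n1   # present once discovery has attached the namespace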
00:20:12.043 [2024-10-29 11:09:17.438297] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:12.043 [2024-10-29 11:09:17.438335] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:12.043 [2024-10-29 11:09:17.438341] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:12.043 [2024-10-29 11:09:17.438346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:12.043 [2024-10-29 11:09:17.438416] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:12.978 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:12.978 [2024-10-29 11:09:18.465480] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:12.978 [2024-10-29 11:09:18.465535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b25980 with addr=10.0.0.3, port=4420 00:20:12.978 [2024-10-29 11:09:18.465549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b25980 is same with the state(6) to be set 00:20:12.978 [2024-10-29 11:09:18.465573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b25980 (9): Bad file descriptor 00:20:12.978 [2024-10-29 11:09:18.465931] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:12.978 [2024-10-29 11:09:18.465959] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:12.978 [2024-10-29 11:09:18.465968] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:12.978 [2024-10-29 11:09:18.465977] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:12.978 [2024-10-29 11:09:18.465985] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:12.979 [2024-10-29 11:09:18.465999] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:12.979 [2024-10-29 11:09:18.466012] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:12.979 [2024-10-29 11:09:18.466022] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:20:12.979 [2024-10-29 11:09:18.466027] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:13.237 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.238 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:13.238 11:09:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:14.175 [2024-10-29 11:09:19.466046] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:14.175 [2024-10-29 11:09:19.466091] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:14.175 [2024-10-29 11:09:19.466110] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:14.175 [2024-10-29 11:09:19.466134] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:14.175 [2024-10-29 11:09:19.466143] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:14.175 [2024-10-29 11:09:19.466151] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:14.175 [2024-10-29 11:09:19.466156] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:14.175 [2024-10-29 11:09:19.466172] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:14.175 [2024-10-29 11:09:19.466197] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:14.175 [2024-10-29 11:09:19.466228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.175 [2024-10-29 11:09:19.466241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.175 [2024-10-29 11:09:19.466252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.175 [2024-10-29 11:09:19.466276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.175 [2024-10-29 11:09:19.466284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.175 [2024-10-29 11:09:19.466292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.175 [2024-10-29 11:09:19.466300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.175 [2024-10-29 11:09:19.466308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.175 [2024-10-29 11:09:19.466317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.175 [2024-10-29 11:09:19.466324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.175 [2024-10-29 11:09:19.466348] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:20:14.175 [2024-10-29 11:09:19.466625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b13f00 (9): Bad file descriptor 00:20:14.175 [2024-10-29 11:09:19.467636] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:14.175 [2024-10-29 11:09:19.467659] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:14.175 11:09:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:15.551 11:09:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:15.551 11:09:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:16.119 [2024-10-29 11:09:21.471326] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:16.119 [2024-10-29 11:09:21.471349] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:16.119 [2024-10-29 11:09:21.471396] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:16.119 [2024-10-29 11:09:21.477360] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:16.119 [2024-10-29 11:09:21.531676] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:16.119 [2024-10-29 11:09:21.532351] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1b00950:1 started. 00:20:16.119 [2024-10-29 11:09:21.533498] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:16.119 [2024-10-29 11:09:21.533575] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:16.119 [2024-10-29 11:09:21.533597] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:16.119 [2024-10-29 11:09:21.533611] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:16.119 [2024-10-29 11:09:21.533619] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:16.119 [2024-10-29 11:09:21.540077] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1b00950 was disconnected and freed. delete nvme_qpair. 
00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92644 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 92644 ']' 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 92644 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92644 00:20:16.378 killing process with pid 92644 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92644' 00:20:16.378 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 92644 00:20:16.379 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 92644 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:16.641 11:09:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:16.641 rmmod nvme_tcp 00:20:16.641 rmmod nvme_fabrics 00:20:16.641 rmmod nvme_keyring 00:20:16.641 11:09:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 92625 ']' 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 92625 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 92625 ']' 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 92625 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92625 00:20:16.641 killing process with pid 92625 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92625' 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 92625 00:20:16.641 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 92625 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.899 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:17.159 ************************************ 00:20:17.159 END TEST nvmf_discovery_remove_ifc 00:20:17.159 ************************************ 00:20:17.159 00:20:17.159 real 0m13.567s 00:20:17.159 user 0m23.469s 00:20:17.159 sys 0m2.381s 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.159 ************************************ 00:20:17.159 START TEST nvmf_identify_kernel_target 00:20:17.159 ************************************ 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:17.159 * Looking for test storage... 
00:20:17.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:17.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.159 --rc genhtml_branch_coverage=1 00:20:17.159 --rc genhtml_function_coverage=1 00:20:17.159 --rc genhtml_legend=1 00:20:17.159 --rc geninfo_all_blocks=1 00:20:17.159 --rc geninfo_unexecuted_blocks=1 00:20:17.159 00:20:17.159 ' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:17.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.159 --rc genhtml_branch_coverage=1 00:20:17.159 --rc genhtml_function_coverage=1 00:20:17.159 --rc genhtml_legend=1 00:20:17.159 --rc geninfo_all_blocks=1 00:20:17.159 --rc geninfo_unexecuted_blocks=1 00:20:17.159 00:20:17.159 ' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:17.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.159 --rc genhtml_branch_coverage=1 00:20:17.159 --rc genhtml_function_coverage=1 00:20:17.159 --rc genhtml_legend=1 00:20:17.159 --rc geninfo_all_blocks=1 00:20:17.159 --rc geninfo_unexecuted_blocks=1 00:20:17.159 00:20:17.159 ' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:17.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.159 --rc genhtml_branch_coverage=1 00:20:17.159 --rc genhtml_function_coverage=1 00:20:17.159 --rc genhtml_legend=1 00:20:17.159 --rc geninfo_all_blocks=1 00:20:17.159 --rc geninfo_unexecuted_blocks=1 00:20:17.159 00:20:17.159 ' 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.159 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.160 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.160 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.160 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.160 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.160 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.419 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:17.420 11:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.420 11:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:17.420 Cannot find device "nvmf_init_br" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:17.420 Cannot find device "nvmf_init_br2" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:17.420 Cannot find device "nvmf_tgt_br" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:17.420 Cannot find device "nvmf_tgt_br2" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:17.420 Cannot find device "nvmf_init_br" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:17.420 Cannot find device "nvmf_init_br2" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:17.420 Cannot find device "nvmf_tgt_br" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:17.420 Cannot find device "nvmf_tgt_br2" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:17.420 Cannot find device "nvmf_br" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:17.420 Cannot find device "nvmf_init_if" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:17.420 Cannot find device "nvmf_init_if2" 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.420 11:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:17.420 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:17.679 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:17.680 11:09:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:17.680 11:09:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:17.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:17.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:17.680 00:20:17.680 --- 10.0.0.3 ping statistics --- 00:20:17.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.680 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:17.680 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:17.680 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:17.680 00:20:17.680 --- 10.0.0.4 ping statistics --- 00:20:17.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.680 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:20:17.680 00:20:17.680 --- 10.0.0.1 ping statistics --- 00:20:17.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.680 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:17.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:17.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:20:17.680 00:20:17.680 --- 10.0.0.2 ping statistics --- 00:20:17.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.680 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:17.680 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:17.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:18.198 Waiting for block devices as requested 00:20:18.198 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:18.198 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:18.198 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:18.457 No valid GPT data, bailing 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:18.457 11:09:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:18.457 No valid GPT data, bailing 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:18.457 No valid GPT data, bailing 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:18.457 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:18.716 No valid GPT data, bailing 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:18.716 11:09:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:18.716 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -a 10.0.0.1 -t tcp -s 4420 00:20:18.716 00:20:18.716 Discovery Log Number of Records 2, Generation counter 2 00:20:18.716 =====Discovery Log Entry 0====== 00:20:18.716 trtype: tcp 00:20:18.716 adrfam: ipv4 00:20:18.716 subtype: current discovery subsystem 00:20:18.716 treq: not specified, sq flow control disable supported 00:20:18.716 portid: 1 00:20:18.716 trsvcid: 4420 00:20:18.716 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:18.716 traddr: 10.0.0.1 00:20:18.716 eflags: none 00:20:18.716 sectype: none 00:20:18.716 =====Discovery Log Entry 1====== 00:20:18.716 trtype: tcp 00:20:18.716 adrfam: ipv4 00:20:18.717 subtype: nvme subsystem 00:20:18.717 treq: not 
specified, sq flow control disable supported 00:20:18.717 portid: 1 00:20:18.717 trsvcid: 4420 00:20:18.717 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:18.717 traddr: 10.0.0.1 00:20:18.717 eflags: none 00:20:18.717 sectype: none 00:20:18.717 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:18.717 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:18.978 ===================================================== 00:20:18.978 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:18.978 ===================================================== 00:20:18.978 Controller Capabilities/Features 00:20:18.978 ================================ 00:20:18.978 Vendor ID: 0000 00:20:18.978 Subsystem Vendor ID: 0000 00:20:18.978 Serial Number: 55fa7a034b7e49f84ece 00:20:18.978 Model Number: Linux 00:20:18.978 Firmware Version: 6.8.9-20 00:20:18.978 Recommended Arb Burst: 0 00:20:18.978 IEEE OUI Identifier: 00 00 00 00:20:18.978 Multi-path I/O 00:20:18.978 May have multiple subsystem ports: No 00:20:18.978 May have multiple controllers: No 00:20:18.978 Associated with SR-IOV VF: No 00:20:18.978 Max Data Transfer Size: Unlimited 00:20:18.978 Max Number of Namespaces: 0 00:20:18.978 Max Number of I/O Queues: 1024 00:20:18.978 NVMe Specification Version (VS): 1.3 00:20:18.978 NVMe Specification Version (Identify): 1.3 00:20:18.978 Maximum Queue Entries: 1024 00:20:18.978 Contiguous Queues Required: No 00:20:18.978 Arbitration Mechanisms Supported 00:20:18.978 Weighted Round Robin: Not Supported 00:20:18.978 Vendor Specific: Not Supported 00:20:18.978 Reset Timeout: 7500 ms 00:20:18.978 Doorbell Stride: 4 bytes 00:20:18.978 NVM Subsystem Reset: Not Supported 00:20:18.978 Command Sets Supported 00:20:18.978 NVM Command Set: Supported 00:20:18.978 Boot Partition: Not Supported 00:20:18.978 Memory Page Size Minimum: 4096 bytes 00:20:18.978 Memory Page Size Maximum: 4096 bytes 00:20:18.978 Persistent Memory Region: Not Supported 00:20:18.978 Optional Asynchronous Events Supported 00:20:18.978 Namespace Attribute Notices: Not Supported 00:20:18.978 Firmware Activation Notices: Not Supported 00:20:18.978 ANA Change Notices: Not Supported 00:20:18.978 PLE Aggregate Log Change Notices: Not Supported 00:20:18.978 LBA Status Info Alert Notices: Not Supported 00:20:18.978 EGE Aggregate Log Change Notices: Not Supported 00:20:18.978 Normal NVM Subsystem Shutdown event: Not Supported 00:20:18.978 Zone Descriptor Change Notices: Not Supported 00:20:18.978 Discovery Log Change Notices: Supported 00:20:18.978 Controller Attributes 00:20:18.978 128-bit Host Identifier: Not Supported 00:20:18.978 Non-Operational Permissive Mode: Not Supported 00:20:18.978 NVM Sets: Not Supported 00:20:18.978 Read Recovery Levels: Not Supported 00:20:18.978 Endurance Groups: Not Supported 00:20:18.978 Predictable Latency Mode: Not Supported 00:20:18.978 Traffic Based Keep ALive: Not Supported 00:20:18.978 Namespace Granularity: Not Supported 00:20:18.978 SQ Associations: Not Supported 00:20:18.978 UUID List: Not Supported 00:20:18.978 Multi-Domain Subsystem: Not Supported 00:20:18.978 Fixed Capacity Management: Not Supported 00:20:18.978 Variable Capacity Management: Not Supported 00:20:18.978 Delete Endurance Group: Not Supported 00:20:18.978 Delete NVM Set: Not Supported 00:20:18.978 Extended LBA Formats Supported: Not Supported 00:20:18.978 Flexible Data 
Placement Supported: Not Supported 00:20:18.978 00:20:18.978 Controller Memory Buffer Support 00:20:18.978 ================================ 00:20:18.978 Supported: No 00:20:18.978 00:20:18.978 Persistent Memory Region Support 00:20:18.978 ================================ 00:20:18.978 Supported: No 00:20:18.978 00:20:18.978 Admin Command Set Attributes 00:20:18.978 ============================ 00:20:18.978 Security Send/Receive: Not Supported 00:20:18.978 Format NVM: Not Supported 00:20:18.978 Firmware Activate/Download: Not Supported 00:20:18.978 Namespace Management: Not Supported 00:20:18.978 Device Self-Test: Not Supported 00:20:18.978 Directives: Not Supported 00:20:18.978 NVMe-MI: Not Supported 00:20:18.978 Virtualization Management: Not Supported 00:20:18.978 Doorbell Buffer Config: Not Supported 00:20:18.978 Get LBA Status Capability: Not Supported 00:20:18.978 Command & Feature Lockdown Capability: Not Supported 00:20:18.978 Abort Command Limit: 1 00:20:18.978 Async Event Request Limit: 1 00:20:18.978 Number of Firmware Slots: N/A 00:20:18.978 Firmware Slot 1 Read-Only: N/A 00:20:18.978 Firmware Activation Without Reset: N/A 00:20:18.978 Multiple Update Detection Support: N/A 00:20:18.978 Firmware Update Granularity: No Information Provided 00:20:18.978 Per-Namespace SMART Log: No 00:20:18.978 Asymmetric Namespace Access Log Page: Not Supported 00:20:18.978 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:18.978 Command Effects Log Page: Not Supported 00:20:18.978 Get Log Page Extended Data: Supported 00:20:18.978 Telemetry Log Pages: Not Supported 00:20:18.978 Persistent Event Log Pages: Not Supported 00:20:18.978 Supported Log Pages Log Page: May Support 00:20:18.978 Commands Supported & Effects Log Page: Not Supported 00:20:18.978 Feature Identifiers & Effects Log Page:May Support 00:20:18.978 NVMe-MI Commands & Effects Log Page: May Support 00:20:18.978 Data Area 4 for Telemetry Log: Not Supported 00:20:18.978 Error Log Page Entries Supported: 1 00:20:18.978 Keep Alive: Not Supported 00:20:18.978 00:20:18.978 NVM Command Set Attributes 00:20:18.978 ========================== 00:20:18.978 Submission Queue Entry Size 00:20:18.978 Max: 1 00:20:18.978 Min: 1 00:20:18.978 Completion Queue Entry Size 00:20:18.978 Max: 1 00:20:18.978 Min: 1 00:20:18.978 Number of Namespaces: 0 00:20:18.978 Compare Command: Not Supported 00:20:18.978 Write Uncorrectable Command: Not Supported 00:20:18.978 Dataset Management Command: Not Supported 00:20:18.979 Write Zeroes Command: Not Supported 00:20:18.979 Set Features Save Field: Not Supported 00:20:18.979 Reservations: Not Supported 00:20:18.979 Timestamp: Not Supported 00:20:18.979 Copy: Not Supported 00:20:18.979 Volatile Write Cache: Not Present 00:20:18.979 Atomic Write Unit (Normal): 1 00:20:18.979 Atomic Write Unit (PFail): 1 00:20:18.979 Atomic Compare & Write Unit: 1 00:20:18.979 Fused Compare & Write: Not Supported 00:20:18.979 Scatter-Gather List 00:20:18.979 SGL Command Set: Supported 00:20:18.979 SGL Keyed: Not Supported 00:20:18.979 SGL Bit Bucket Descriptor: Not Supported 00:20:18.979 SGL Metadata Pointer: Not Supported 00:20:18.979 Oversized SGL: Not Supported 00:20:18.979 SGL Metadata Address: Not Supported 00:20:18.979 SGL Offset: Supported 00:20:18.979 Transport SGL Data Block: Not Supported 00:20:18.979 Replay Protected Memory Block: Not Supported 00:20:18.979 00:20:18.979 Firmware Slot Information 00:20:18.979 ========================= 00:20:18.979 Active slot: 0 00:20:18.979 00:20:18.979 00:20:18.979 Error Log 
00:20:18.979 ========= 00:20:18.979 00:20:18.979 Active Namespaces 00:20:18.979 ================= 00:20:18.979 Discovery Log Page 00:20:18.979 ================== 00:20:18.979 Generation Counter: 2 00:20:18.979 Number of Records: 2 00:20:18.979 Record Format: 0 00:20:18.979 00:20:18.979 Discovery Log Entry 0 00:20:18.979 ---------------------- 00:20:18.979 Transport Type: 3 (TCP) 00:20:18.979 Address Family: 1 (IPv4) 00:20:18.979 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:18.979 Entry Flags: 00:20:18.979 Duplicate Returned Information: 0 00:20:18.979 Explicit Persistent Connection Support for Discovery: 0 00:20:18.979 Transport Requirements: 00:20:18.979 Secure Channel: Not Specified 00:20:18.979 Port ID: 1 (0x0001) 00:20:18.979 Controller ID: 65535 (0xffff) 00:20:18.979 Admin Max SQ Size: 32 00:20:18.979 Transport Service Identifier: 4420 00:20:18.979 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:18.979 Transport Address: 10.0.0.1 00:20:18.979 Discovery Log Entry 1 00:20:18.979 ---------------------- 00:20:18.979 Transport Type: 3 (TCP) 00:20:18.979 Address Family: 1 (IPv4) 00:20:18.979 Subsystem Type: 2 (NVM Subsystem) 00:20:18.979 Entry Flags: 00:20:18.979 Duplicate Returned Information: 0 00:20:18.979 Explicit Persistent Connection Support for Discovery: 0 00:20:18.979 Transport Requirements: 00:20:18.979 Secure Channel: Not Specified 00:20:18.979 Port ID: 1 (0x0001) 00:20:18.979 Controller ID: 65535 (0xffff) 00:20:18.979 Admin Max SQ Size: 32 00:20:18.979 Transport Service Identifier: 4420 00:20:18.979 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:18.979 Transport Address: 10.0.0.1 00:20:18.979 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:18.979 get_feature(0x01) failed 00:20:18.979 get_feature(0x02) failed 00:20:18.979 get_feature(0x04) failed 00:20:18.979 ===================================================== 00:20:18.979 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:18.979 ===================================================== 00:20:18.979 Controller Capabilities/Features 00:20:18.979 ================================ 00:20:18.979 Vendor ID: 0000 00:20:18.979 Subsystem Vendor ID: 0000 00:20:18.979 Serial Number: fc7d81c672e410cef688 00:20:18.979 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:18.979 Firmware Version: 6.8.9-20 00:20:18.979 Recommended Arb Burst: 6 00:20:18.979 IEEE OUI Identifier: 00 00 00 00:20:18.979 Multi-path I/O 00:20:18.979 May have multiple subsystem ports: Yes 00:20:18.979 May have multiple controllers: Yes 00:20:18.979 Associated with SR-IOV VF: No 00:20:18.979 Max Data Transfer Size: Unlimited 00:20:18.979 Max Number of Namespaces: 1024 00:20:18.979 Max Number of I/O Queues: 128 00:20:18.979 NVMe Specification Version (VS): 1.3 00:20:18.979 NVMe Specification Version (Identify): 1.3 00:20:18.979 Maximum Queue Entries: 1024 00:20:18.979 Contiguous Queues Required: No 00:20:18.979 Arbitration Mechanisms Supported 00:20:18.979 Weighted Round Robin: Not Supported 00:20:18.979 Vendor Specific: Not Supported 00:20:18.979 Reset Timeout: 7500 ms 00:20:18.979 Doorbell Stride: 4 bytes 00:20:18.979 NVM Subsystem Reset: Not Supported 00:20:18.979 Command Sets Supported 00:20:18.979 NVM Command Set: Supported 00:20:18.979 Boot Partition: Not Supported 00:20:18.979 Memory 
Page Size Minimum: 4096 bytes 00:20:18.979 Memory Page Size Maximum: 4096 bytes 00:20:18.979 Persistent Memory Region: Not Supported 00:20:18.979 Optional Asynchronous Events Supported 00:20:18.979 Namespace Attribute Notices: Supported 00:20:18.979 Firmware Activation Notices: Not Supported 00:20:18.979 ANA Change Notices: Supported 00:20:18.979 PLE Aggregate Log Change Notices: Not Supported 00:20:18.979 LBA Status Info Alert Notices: Not Supported 00:20:18.979 EGE Aggregate Log Change Notices: Not Supported 00:20:18.979 Normal NVM Subsystem Shutdown event: Not Supported 00:20:18.979 Zone Descriptor Change Notices: Not Supported 00:20:18.979 Discovery Log Change Notices: Not Supported 00:20:18.979 Controller Attributes 00:20:18.979 128-bit Host Identifier: Supported 00:20:18.979 Non-Operational Permissive Mode: Not Supported 00:20:18.979 NVM Sets: Not Supported 00:20:18.979 Read Recovery Levels: Not Supported 00:20:18.979 Endurance Groups: Not Supported 00:20:18.979 Predictable Latency Mode: Not Supported 00:20:18.979 Traffic Based Keep ALive: Supported 00:20:18.979 Namespace Granularity: Not Supported 00:20:18.979 SQ Associations: Not Supported 00:20:18.979 UUID List: Not Supported 00:20:18.979 Multi-Domain Subsystem: Not Supported 00:20:18.979 Fixed Capacity Management: Not Supported 00:20:18.979 Variable Capacity Management: Not Supported 00:20:18.979 Delete Endurance Group: Not Supported 00:20:18.979 Delete NVM Set: Not Supported 00:20:18.979 Extended LBA Formats Supported: Not Supported 00:20:18.979 Flexible Data Placement Supported: Not Supported 00:20:18.979 00:20:18.979 Controller Memory Buffer Support 00:20:18.979 ================================ 00:20:18.979 Supported: No 00:20:18.979 00:20:18.979 Persistent Memory Region Support 00:20:18.979 ================================ 00:20:18.979 Supported: No 00:20:18.979 00:20:18.979 Admin Command Set Attributes 00:20:18.979 ============================ 00:20:18.979 Security Send/Receive: Not Supported 00:20:18.979 Format NVM: Not Supported 00:20:18.979 Firmware Activate/Download: Not Supported 00:20:18.979 Namespace Management: Not Supported 00:20:18.979 Device Self-Test: Not Supported 00:20:18.979 Directives: Not Supported 00:20:18.979 NVMe-MI: Not Supported 00:20:18.979 Virtualization Management: Not Supported 00:20:18.979 Doorbell Buffer Config: Not Supported 00:20:18.979 Get LBA Status Capability: Not Supported 00:20:18.979 Command & Feature Lockdown Capability: Not Supported 00:20:18.979 Abort Command Limit: 4 00:20:18.979 Async Event Request Limit: 4 00:20:18.979 Number of Firmware Slots: N/A 00:20:18.979 Firmware Slot 1 Read-Only: N/A 00:20:18.979 Firmware Activation Without Reset: N/A 00:20:18.979 Multiple Update Detection Support: N/A 00:20:18.979 Firmware Update Granularity: No Information Provided 00:20:18.979 Per-Namespace SMART Log: Yes 00:20:18.979 Asymmetric Namespace Access Log Page: Supported 00:20:18.979 ANA Transition Time : 10 sec 00:20:18.979 00:20:18.979 Asymmetric Namespace Access Capabilities 00:20:18.979 ANA Optimized State : Supported 00:20:18.979 ANA Non-Optimized State : Supported 00:20:18.979 ANA Inaccessible State : Supported 00:20:18.979 ANA Persistent Loss State : Supported 00:20:18.979 ANA Change State : Supported 00:20:18.979 ANAGRPID is not changed : No 00:20:18.979 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:18.979 00:20:18.979 ANA Group Identifier Maximum : 128 00:20:18.979 Number of ANA Group Identifiers : 128 00:20:18.979 Max Number of Allowed Namespaces : 1024 00:20:18.979 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:18.979 Command Effects Log Page: Supported 00:20:18.979 Get Log Page Extended Data: Supported 00:20:18.979 Telemetry Log Pages: Not Supported 00:20:18.979 Persistent Event Log Pages: Not Supported 00:20:18.979 Supported Log Pages Log Page: May Support 00:20:18.979 Commands Supported & Effects Log Page: Not Supported 00:20:18.979 Feature Identifiers & Effects Log Page:May Support 00:20:18.979 NVMe-MI Commands & Effects Log Page: May Support 00:20:18.979 Data Area 4 for Telemetry Log: Not Supported 00:20:18.979 Error Log Page Entries Supported: 128 00:20:18.979 Keep Alive: Supported 00:20:18.979 Keep Alive Granularity: 1000 ms 00:20:18.979 00:20:18.979 NVM Command Set Attributes 00:20:18.979 ========================== 00:20:18.979 Submission Queue Entry Size 00:20:18.979 Max: 64 00:20:18.979 Min: 64 00:20:18.979 Completion Queue Entry Size 00:20:18.979 Max: 16 00:20:18.979 Min: 16 00:20:18.979 Number of Namespaces: 1024 00:20:18.979 Compare Command: Not Supported 00:20:18.979 Write Uncorrectable Command: Not Supported 00:20:18.979 Dataset Management Command: Supported 00:20:18.980 Write Zeroes Command: Supported 00:20:18.980 Set Features Save Field: Not Supported 00:20:18.980 Reservations: Not Supported 00:20:18.980 Timestamp: Not Supported 00:20:18.980 Copy: Not Supported 00:20:18.980 Volatile Write Cache: Present 00:20:18.980 Atomic Write Unit (Normal): 1 00:20:18.980 Atomic Write Unit (PFail): 1 00:20:18.980 Atomic Compare & Write Unit: 1 00:20:18.980 Fused Compare & Write: Not Supported 00:20:18.980 Scatter-Gather List 00:20:18.980 SGL Command Set: Supported 00:20:18.980 SGL Keyed: Not Supported 00:20:18.980 SGL Bit Bucket Descriptor: Not Supported 00:20:18.980 SGL Metadata Pointer: Not Supported 00:20:18.980 Oversized SGL: Not Supported 00:20:18.980 SGL Metadata Address: Not Supported 00:20:18.980 SGL Offset: Supported 00:20:18.980 Transport SGL Data Block: Not Supported 00:20:18.980 Replay Protected Memory Block: Not Supported 00:20:18.980 00:20:18.980 Firmware Slot Information 00:20:18.980 ========================= 00:20:18.980 Active slot: 0 00:20:18.980 00:20:18.980 Asymmetric Namespace Access 00:20:18.980 =========================== 00:20:18.980 Change Count : 0 00:20:18.980 Number of ANA Group Descriptors : 1 00:20:18.980 ANA Group Descriptor : 0 00:20:18.980 ANA Group ID : 1 00:20:18.980 Number of NSID Values : 1 00:20:18.980 Change Count : 0 00:20:18.980 ANA State : 1 00:20:18.980 Namespace Identifier : 1 00:20:18.980 00:20:18.980 Commands Supported and Effects 00:20:18.980 ============================== 00:20:18.980 Admin Commands 00:20:18.980 -------------- 00:20:18.980 Get Log Page (02h): Supported 00:20:18.980 Identify (06h): Supported 00:20:18.980 Abort (08h): Supported 00:20:18.980 Set Features (09h): Supported 00:20:18.980 Get Features (0Ah): Supported 00:20:18.980 Asynchronous Event Request (0Ch): Supported 00:20:18.980 Keep Alive (18h): Supported 00:20:18.980 I/O Commands 00:20:18.980 ------------ 00:20:18.980 Flush (00h): Supported 00:20:18.980 Write (01h): Supported LBA-Change 00:20:18.980 Read (02h): Supported 00:20:18.980 Write Zeroes (08h): Supported LBA-Change 00:20:18.980 Dataset Management (09h): Supported 00:20:18.980 00:20:18.980 Error Log 00:20:18.980 ========= 00:20:18.980 Entry: 0 00:20:18.980 Error Count: 0x3 00:20:18.980 Submission Queue Id: 0x0 00:20:18.980 Command Id: 0x5 00:20:18.980 Phase Bit: 0 00:20:18.980 Status Code: 0x2 00:20:18.980 Status Code Type: 0x0 00:20:18.980 Do Not Retry: 1 00:20:18.980 Error 
Location: 0x28 00:20:18.980 LBA: 0x0 00:20:18.980 Namespace: 0x0 00:20:18.980 Vendor Log Page: 0x0 00:20:18.980 ----------- 00:20:18.980 Entry: 1 00:20:18.980 Error Count: 0x2 00:20:18.980 Submission Queue Id: 0x0 00:20:18.980 Command Id: 0x5 00:20:18.980 Phase Bit: 0 00:20:18.980 Status Code: 0x2 00:20:18.980 Status Code Type: 0x0 00:20:18.980 Do Not Retry: 1 00:20:18.980 Error Location: 0x28 00:20:18.980 LBA: 0x0 00:20:18.980 Namespace: 0x0 00:20:18.980 Vendor Log Page: 0x0 00:20:18.980 ----------- 00:20:18.980 Entry: 2 00:20:18.980 Error Count: 0x1 00:20:18.980 Submission Queue Id: 0x0 00:20:18.980 Command Id: 0x4 00:20:18.980 Phase Bit: 0 00:20:18.980 Status Code: 0x2 00:20:18.980 Status Code Type: 0x0 00:20:18.980 Do Not Retry: 1 00:20:18.980 Error Location: 0x28 00:20:18.980 LBA: 0x0 00:20:18.980 Namespace: 0x0 00:20:18.980 Vendor Log Page: 0x0 00:20:18.980 00:20:18.980 Number of Queues 00:20:18.980 ================ 00:20:18.980 Number of I/O Submission Queues: 128 00:20:18.980 Number of I/O Completion Queues: 128 00:20:18.980 00:20:18.980 ZNS Specific Controller Data 00:20:18.980 ============================ 00:20:18.980 Zone Append Size Limit: 0 00:20:18.980 00:20:18.980 00:20:18.980 Active Namespaces 00:20:18.980 ================= 00:20:18.980 get_feature(0x05) failed 00:20:18.980 Namespace ID:1 00:20:18.980 Command Set Identifier: NVM (00h) 00:20:18.980 Deallocate: Supported 00:20:18.980 Deallocated/Unwritten Error: Not Supported 00:20:18.980 Deallocated Read Value: Unknown 00:20:18.980 Deallocate in Write Zeroes: Not Supported 00:20:18.980 Deallocated Guard Field: 0xFFFF 00:20:18.980 Flush: Supported 00:20:18.980 Reservation: Not Supported 00:20:18.980 Namespace Sharing Capabilities: Multiple Controllers 00:20:18.980 Size (in LBAs): 1310720 (5GiB) 00:20:18.980 Capacity (in LBAs): 1310720 (5GiB) 00:20:18.980 Utilization (in LBAs): 1310720 (5GiB) 00:20:18.980 UUID: c84cf5e4-cd35-4c3e-858c-0d4e9355f862 00:20:18.980 Thin Provisioning: Not Supported 00:20:18.980 Per-NS Atomic Units: Yes 00:20:18.980 Atomic Boundary Size (Normal): 0 00:20:18.980 Atomic Boundary Size (PFail): 0 00:20:18.980 Atomic Boundary Offset: 0 00:20:18.980 NGUID/EUI64 Never Reused: No 00:20:18.980 ANA group ID: 1 00:20:18.980 Namespace Write Protected: No 00:20:18.980 Number of LBA Formats: 1 00:20:18.980 Current LBA Format: LBA Format #00 00:20:18.980 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:18.980 00:20:18.980 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:18.980 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:18.980 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.239 rmmod nvme_tcp 00:20:19.239 rmmod nvme_fabrics 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:19.239 11:09:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.239 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.240 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:19.499 11:09:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:20.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.326 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:20.326 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:20.326 00:20:20.326 real 0m3.290s 00:20:20.326 user 0m1.194s 00:20:20.326 sys 0m1.463s 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.326 ************************************ 00:20:20.326 END TEST nvmf_identify_kernel_target 00:20:20.326 ************************************ 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.326 ************************************ 00:20:20.326 START TEST nvmf_auth_host 00:20:20.326 ************************************ 00:20:20.326 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:20.586 * Looking for test storage... 
00:20:20.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:20.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.586 --rc genhtml_branch_coverage=1 00:20:20.586 --rc genhtml_function_coverage=1 00:20:20.586 --rc genhtml_legend=1 00:20:20.586 --rc geninfo_all_blocks=1 00:20:20.586 --rc geninfo_unexecuted_blocks=1 00:20:20.586 00:20:20.586 ' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:20.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.586 --rc genhtml_branch_coverage=1 00:20:20.586 --rc genhtml_function_coverage=1 00:20:20.586 --rc genhtml_legend=1 00:20:20.586 --rc geninfo_all_blocks=1 00:20:20.586 --rc geninfo_unexecuted_blocks=1 00:20:20.586 00:20:20.586 ' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:20.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.586 --rc genhtml_branch_coverage=1 00:20:20.586 --rc genhtml_function_coverage=1 00:20:20.586 --rc genhtml_legend=1 00:20:20.586 --rc geninfo_all_blocks=1 00:20:20.586 --rc geninfo_unexecuted_blocks=1 00:20:20.586 00:20:20.586 ' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:20.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.586 --rc genhtml_branch_coverage=1 00:20:20.586 --rc genhtml_function_coverage=1 00:20:20.586 --rc genhtml_legend=1 00:20:20.586 --rc geninfo_all_blocks=1 00:20:20.586 --rc geninfo_unexecuted_blocks=1 00:20:20.586 00:20:20.586 ' 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.586 11:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.586 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:20.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:20.587 Cannot find device "nvmf_init_br" 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:20.587 Cannot find device "nvmf_init_br2" 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:20.587 Cannot find device "nvmf_tgt_br" 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.587 Cannot find device "nvmf_tgt_br2" 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:20.587 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:20.846 Cannot find device "nvmf_init_br" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:20.847 Cannot find device "nvmf_init_br2" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:20.847 Cannot find device "nvmf_tgt_br" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:20.847 Cannot find device "nvmf_tgt_br2" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:20.847 Cannot find device "nvmf_br" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:20.847 Cannot find device "nvmf_init_if" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:20.847 Cannot find device "nvmf_init_if2" 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.847 11:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:20.847 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
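The nvmf_veth_init sequence traced above (finished by the remaining "master nvmf_br" enslave, the iptables rules, and the ping checks just below) builds a small, reproducible test topology: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and all of the *_br peer ends joined by the nvmf_br bridge. A condensed, hand-written re-creation of that sequence for illustration, using the same interface names and 10.0.0.0/24 addressing seen in the log (the harness itself drives this from nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk                      # target-side namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if           # .1/.2 initiator, .3/.4 target
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br              # bridge all peer ends together
  done
  # open NVMe/TCP port 4420 on the initiator interfaces and allow bridge-local forwarding;
  # the harness additionally tags these rules with an SPDK_NVMF comment so cleanup can find them
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks that follow (10.0.0.3 and 10.0.0.4 from the root namespace, 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) confirm the bridged path works in both directions before the target is started.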
00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:21.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:20:21.106 00:20:21.106 --- 10.0.0.3 ping statistics --- 00:20:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.106 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:21.106 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:21.106 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:20:21.106 00:20:21.106 --- 10.0.0.4 ping statistics --- 00:20:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.106 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:21.106 00:20:21.106 --- 10.0.0.1 ping statistics --- 00:20:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.106 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:21.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:21.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:20:21.106 00:20:21.106 --- 10.0.0.2 ping statistics --- 00:20:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.106 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=93638 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 93638 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 93638 ']' 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.106 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.107 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
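nvmfappstart, traced above, launches nvmf_tgt inside the target namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), records its PID (93638 here), and then blocks in waitforlisten until the application's RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of such a wait loop, assuming the repo's scripts/rpc.py is available and using rpc_get_methods as the readiness probe; the probe choice and the retry budget are illustrative, not lifted from autotest_common.sh:

  pid=93638                        # PID reported by nvmfappstart in the trace
  rpc_sock=/var/tmp/spdk.sock
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      # give up early if the target process died during startup
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
      # the socket is usable once any RPC call succeeds
      if "$rpc_py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done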
00:20:21.107 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.107 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c1c436bfa6ca45d005536d4f345952ff 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.NBv 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c1c436bfa6ca45d005536d4f345952ff 0 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c1c436bfa6ca45d005536d4f345952ff 0 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c1c436bfa6ca45d005536d4f345952ff 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:21.366 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.NBv 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.NBv 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.NBv 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.625 11:09:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=751c6c874019474c265a6a664a03c68ea5dcf89251dd15b9c1f5db477e052bd6 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gGd 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 751c6c874019474c265a6a664a03c68ea5dcf89251dd15b9c1f5db477e052bd6 3 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 751c6c874019474c265a6a664a03c68ea5dcf89251dd15b9c1f5db477e052bd6 3 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=751c6c874019474c265a6a664a03c68ea5dcf89251dd15b9c1f5db477e052bd6 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gGd 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gGd 00:20:21.625 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.gGd 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=65d44d9b823a13b5437af6ae5a5783bc04b11715ed32c1f9 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Cvb 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 65d44d9b823a13b5437af6ae5a5783bc04b11715ed32c1f9 0 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 65d44d9b823a13b5437af6ae5a5783bc04b11715ed32c1f9 0 
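Every gen_dhchap_key call in this stretch (and the ones that follow for the remaining key/ckey slots) uses the same recipe: read random bytes, render them as hex, wrap the hex string in a DHHC-1 secret, and park the result in a private temp file. A hedged sketch of one round, shaped like key 1 above (null digest, 48 hex characters); the base64-plus-CRC-32 layout inside the python step is an assumption based on the NVMe DH-HMAC-CHAP secret representation:

secret=$(xxd -p -c0 -l 24 /dev/urandom)         # 24 random bytes -> 48 hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$secret" <<'PY' > "$file"
import base64, sys, zlib
s = sys.argv[1].encode()
crc = zlib.crc32(s).to_bytes(4, "little")       # assumed: CRC-32 of the secret, little endian
print("DHHC-1:00:" + base64.b64encode(s + crc).decode() + ":", end="")
PY
chmod 0600 "$file"                              # secrets stay readable only by the test user
echo "$file"                                    # this path becomes keys[1]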
00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=65d44d9b823a13b5437af6ae5a5783bc04b11715ed32c1f9 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:21.626 11:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Cvb 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Cvb 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Cvb 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=560259a7f57799783e71d5712d6b0fe2d2a37cb21d5c2f29 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oKc 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 560259a7f57799783e71d5712d6b0fe2d2a37cb21d5c2f29 2 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 560259a7f57799783e71d5712d6b0fe2d2a37cb21d5c2f29 2 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=560259a7f57799783e71d5712d6b0fe2d2a37cb21d5c2f29 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oKc 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oKc 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oKc 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.626 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2fa856828cc5bc081c233acebb34a2c0 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YCs 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2fa856828cc5bc081c233acebb34a2c0 1 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2fa856828cc5bc081c233acebb34a2c0 1 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2fa856828cc5bc081c233acebb34a2c0 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:21.626 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YCs 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YCs 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YCs 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6bfb9c1e6666c162ddaa531a1b73a687 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Dso 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6bfb9c1e6666c162ddaa531a1b73a687 1 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6bfb9c1e6666c162ddaa531a1b73a687 1 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=6bfb9c1e6666c162ddaa531a1b73a687 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Dso 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Dso 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Dso 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1a1bb54280c73b82760d872763d356b3361e09329047d7c9 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CBd 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a1bb54280c73b82760d872763d356b3361e09329047d7c9 2 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a1bb54280c73b82760d872763d356b3361e09329047d7c9 2 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a1bb54280c73b82760d872763d356b3361e09329047d7c9 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CBd 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CBd 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CBd 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:21.885 11:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=975aac9b8ab4d92962ccaec62d767f30 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WrA 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 975aac9b8ab4d92962ccaec62d767f30 0 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 975aac9b8ab4d92962ccaec62d767f30 0 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=975aac9b8ab4d92962ccaec62d767f30 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WrA 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WrA 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.WrA 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f56fc991a6ee64f3d9cef70198f41d8e4d5af60ae3b1724cf16dea4f579c772f 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ZHf 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f56fc991a6ee64f3d9cef70198f41d8e4d5af60ae3b1724cf16dea4f579c772f 3 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f56fc991a6ee64f3d9cef70198f41d8e4d5af60ae3b1724cf16dea4f579c772f 3 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:21.885 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:21.886 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f56fc991a6ee64f3d9cef70198f41d8e4d5af60ae3b1724cf16dea4f579c772f 00:20:21.886 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:21.886 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ZHf 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ZHf 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ZHf 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93638 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 93638 ']' 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:22.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:22.145 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NBv 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.gGd ]] 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gGd 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Cvb 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.oKc ]] 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.oKc 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YCs 00:20:22.404 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Dso ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Dso 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CBd 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.WrA ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.WrA 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ZHf 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:22.405 11:09:27 
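Each secret file generated above is then handed to the initiator-side app as a named keyring entry; the later attach calls refer to those names (key0..key4, ckey0..ckey3) rather than to the files. The rpc_cmd wrapper in the trace is equivalent to calling rpc.py directly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.NBv       # host secret for slot 0
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gGd     # matching controller secret
# key1/ckey1 through key4 are registered the same way with the remaining temp files;
# slot 4 has no controller secret, so only key4 is added.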
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:22.405 11:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:22.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.923 Waiting for block devices as requested 00:20:22.924 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:22.924 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:23.492 No valid GPT data, bailing 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:23.492 11:09:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:23.751 No valid GPT data, bailing 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
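The probe above, repeated for nvme0n3 and nvme1n1 just below, is how the test picks a backing device for the kernel target: a namespace qualifies when it is not zoned and spdk-gpt.py/blkid find no partition table on it, so "No valid GPT data, bailing" is the expected, healthy result. A condensed sketch of that selection logic (the helper name is illustrative; the checks mirror the trace):

pick_backing_dev() {
    local sysdev dev nvme=""
    for sysdev in /sys/block/nvme*; do
        [[ -e $sysdev ]] || continue
        dev=/dev/${sysdev##*/}
        # Zoned namespaces are skipped.
        if [[ -e $sysdev/queue/zoned && $(cat "$sysdev/queue/zoned") != none ]]; then
            continue
        fi
        # A device that still carries a partition table counts as in use.
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        nvme=$dev        # as in the trace, the last free device (here nvme1n1) wins
    done
    echo "$nvme"
}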
scripts/common.sh@395 -- # return 1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:23.751 No valid GPT data, bailing 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:23.751 No valid GPT data, bailing 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:23.751 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 --hostid=61a87890-fef5-4d39-ae0e-c34cd0a177b6 -a 10.0.0.1 -t tcp -s 4420 00:20:23.751 00:20:23.751 Discovery Log Number of Records 2, Generation counter 2 00:20:23.751 =====Discovery Log Entry 0====== 00:20:23.751 trtype: tcp 00:20:23.751 adrfam: ipv4 00:20:23.751 subtype: current discovery subsystem 00:20:23.751 treq: not specified, sq flow control disable supported 00:20:23.752 portid: 1 00:20:23.752 trsvcid: 4420 00:20:23.752 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:23.752 traddr: 10.0.0.1 00:20:23.752 eflags: none 00:20:23.752 sectype: none 00:20:23.752 =====Discovery Log Entry 1====== 00:20:23.752 trtype: tcp 00:20:23.752 adrfam: ipv4 00:20:23.752 subtype: nvme subsystem 00:20:23.752 treq: not specified, sq flow control disable supported 00:20:23.752 portid: 1 00:20:23.752 trsvcid: 4420 00:20:23.752 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:23.752 traddr: 10.0.0.1 00:20:23.752 eflags: none 00:20:23.752 sectype: none 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.752 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 nvme0n1 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.011 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:24.281 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.282 nvme0n1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.282 
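The round that just finished for key 0 is the shape of every connect_authenticate call in the rest of the log: constrain the initiator's DH-HMAC-CHAP digests and DH groups, attach to the kernel target with the host key (and controller key, when one exists), confirm the controller appears, then detach. The same four RPCs, issued through rpc.py instead of the trace's rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0    # names registered earlier via keyring_file_add_key

# Authentication succeeded if the controller shows up under its expected name.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

$rpc bdev_nvme_detach_controller nvme0            # clean slate for the next digest/dhgroup/key combination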
11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.282 11:09:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.282 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.543 nvme0n1 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:24.543 11:09:29 
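On the target side, everything configure_kernel_target and nvmet_auth_set_key do is plain configfs manipulation; bash xtrace does not print redirections, which is why the echo lines in the trace appear to go nowhere. A hedged sketch of the same assembly, where the attribute file names are assumptions based on the kernel nvmet configfs layout and the values are the ones visible in the trace:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir -p "$subsys/namespaces/1" "$port" "$host"

echo 0             > "$subsys/attr_allow_any_host"        # only explicitly allowed hosts may connect
echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"   # backing namespace picked earlier
echo 1             > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
ln -s "$host"   "$subsys/allowed_hosts/"

# Re-keyed by nvmet_auth_set_key for every digest/dhgroup/key combination:
echo "hmac(sha256)" > "$host/dhchap_hash"
echo "ffdhe2048"    > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"                  # DHHC-1 host secret for this round
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # controller secret, when the slot has one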
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.543 nvme0n1 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.543 11:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.543 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.543 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.543 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.543 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.803 11:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.803 nvme0n1 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:24.803 
11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:24.803 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:24.804 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:24.804 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:24.804 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.804 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
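
The trace above walks host/auth.sh's keyid loop for the sha256 digest with the ffdhe2048 DH group: for each keyid the target-side secret is programmed via nvmet_auth_set_key, the host is restricted to that digest/dhgroup with bdev_nvme_set_options, a controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), its presence is confirmed through bdev_nvme_get_controllers, and it is detached again. Below is a minimal sketch of one such iteration reconstructed from this trace; nvmet_auth_set_key, rpc_cmd, get_main_ns_ip and the keys[]/ckeys[] arrays are helpers defined earlier in the suite (host/auth.sh, nvmf/common.sh) and are not shown here.

    for keyid in "${!keys[@]}"; do
      # Program the kernel nvmet target with the DH-HMAC-CHAP secret for this keyid.
      nvmet_auth_set_key sha256 ffdhe2048 "$keyid"

      # connect_authenticate: limit the initiator to the digest/dhgroup under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

      # The controller key is optional; keyid 4 has no ckey, so the flag is omitted there.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Connect over TCP to the address resolved by get_main_ns_ip (10.0.0.1 in this run).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

      # Verify the authenticated controller actually came up, then tear it down for the next keyid.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done

The same loop then repeats for the remaining DH groups traced below, ffdhe3072 and ffdhe4096.
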
00:20:25.063 nvme0n1 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.063 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.322 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:25.322 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:25.322 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:25.322 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:25.322 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.322 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:25.323 11:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.323 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 nvme0n1 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.582 11:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.582 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.583 11:09:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.583 11:09:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.583 nvme0n1 00:20:25.583 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.583 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.583 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.583 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.583 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.583 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:25.842 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 nvme0n1 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.843 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.103 nvme0n1 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.103 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.362 nvme0n1 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.362 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.363 11:09:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.930 11:09:32 
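
Each attach in this trace is preceded by a call to get_main_ns_ip (nvmf/common.sh@769-783), which resolves the address to connect to from the transport in use; in this TCP run it selects NVMF_INITIATOR_IP and prints 10.0.0.1. A rough reconstruction of that helper from the trace follows; the transport variable name and the error handling are assumptions, only the candidate table and the indirect expansion are visible in the trace.

    get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # RDMA runs target the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP      # TCP runs (this job) use the initiator IP

      # Assumed: $TEST_TRANSPORT carries the transport ("tcp" here); fail on unknown transports.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

      ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable holding the address
      ip=${!ip}                             # indirect expansion, e.g. NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
    }
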
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.930 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.931 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.190 nvme0n1 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.190 11:09:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.190 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.453 nvme0n1 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.454 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.727 nvme0n1 00:20:27.727 11:09:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.727 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.986 nvme0n1 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:27.986 11:09:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:27.986 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.987 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:27.987 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:27.987 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:27.987 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:27.987 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.987 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.246 nvme0n1 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:28.246 11:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:29.624 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:29.624 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:29.624 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:29.624 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.625 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 nvme0n1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.193 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.452 nvme0n1 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.452 11:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.452 11:09:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.452 11:09:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.712 nvme0n1 00:20:30.712 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.712 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.712 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.712 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:30.712 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.712 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:30.972 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.972 
11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.232 nvme0n1 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.232 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.491 nvme0n1 00:20:31.491 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.491 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.491 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:31.492 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.492 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.751 11:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.751 11:09:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:31.751 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.752 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.320 nvme0n1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.320 11:09:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.887 nvme0n1 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.887 
11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.887 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.454 nvme0n1 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.454 11:09:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.021 nvme0n1 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.021 11:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.021 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.022 11:09:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.022 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.593 nvme0n1 00:20:34.593 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.593 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.593 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.593 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.593 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.593 11:09:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.593 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.853 nvme0n1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.853 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.112 nvme0n1 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:35.112 
11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.112 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.113 nvme0n1 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.113 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.372 
11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.372 nvme0n1 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.372 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:35.373 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.373 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.633 nvme0n1 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.633 11:09:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.633 nvme0n1 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.633 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.894 
11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:35.894 11:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.894 nvme0n1 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:35.894 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:35.895 11:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.895 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.155 nvme0n1 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.155 11:09:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.155 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.415 nvme0n1 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.415 
11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.415 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
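The host-side half of each iteration, connect_authenticate, repeats the same RPC sequence throughout this section. Condensed from the trace above (here for sha384 / ffdhe3072 / key id 4), one pass looks roughly like the sketch below; rpc_cmd is the SPDK test helper that forwards to scripts/rpc.py, and the NQNs, address, and key names are the ones printed in the log:

    # restrict the initiator to the digest and DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # attach with DH-CHAP; iterations whose ckey is empty omit --dhchap-ctrlr-key (as here for key4)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
    # confirm the authenticated controller came up, then detach before the next digest/dhgroup/keyid combination
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0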
00:20:36.675 nvme0n1 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.675 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.676 11:09:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:36.676 11:09:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.676 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.935 nvme0n1 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:36.935 11:09:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:36.935 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.936 11:09:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.936 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 nvme0n1 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.195 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.455 nvme0n1 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.455 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.715 nvme0n1 00:20:37.715 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.715 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.715 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.715 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.715 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.715 11:09:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.715 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.973 nvme0n1 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.973 11:09:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.973 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.231 nvme0n1 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.231 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.232 11:09:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.232 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.491 nvme0n1 00:20:38.491 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.750 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.750 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:38.751 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.751 11:09:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.751 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.010 nvme0n1 00:20:39.010 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.010 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.010 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.010 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.010 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.010 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.011 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.584 nvme0n1 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.584 11:09:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.584 11:09:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.843 nvme0n1 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.843 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.844 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.412 nvme0n1 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:40.412 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.413 11:09:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.981 nvme0n1 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.981 11:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:40.981 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.982 11:09:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.982 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.549 nvme0n1 00:20:41.549 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.549 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.549 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.549 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.549 11:09:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.549 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.549 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.549 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.549 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.549 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.808 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:41.809 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.809 
11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.377 nvme0n1 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.377 11:09:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 nvme0n1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:42.947 11:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.947 11:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 nvme0n1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:42.947 11:09:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:42.947 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.207 nvme0n1 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.207 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.208 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.468 nvme0n1 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:43.468 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.469 nvme0n1 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.469 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.729 11:09:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.729 nvme0n1 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.729 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.990 nvme0n1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.990 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.250 nvme0n1 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:44.250 
11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.250 nvme0n1 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.250 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.508 
11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.508 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.509 nvme0n1 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.509 11:09:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.509 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.509 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.767 nvme0n1 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.767 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.768 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.027 nvme0n1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.027 
11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.027 11:09:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.027 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 nvme0n1 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:45.286 11:09:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.286 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.287 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.546 nvme0n1 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.546 11:09:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.546 11:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.546 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.806 nvme0n1 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.806 
11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.806 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
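The trace around this point is the test's inner loop over key IDs 0-4 for one digest/dhgroup pair: each pass loads the DH-HMAC-CHAP key into the kernel nvmet target (nvmet_auth_set_key), restricts the host to that digest and dhgroup with bdev_nvme_set_options, attaches the controller over TCP with the matching --dhchap-key (and --dhchap-ctrlr-key when controller authentication is exercised), checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it before the next pass. A minimal sketch of one such pass, assuming scripts/rpc.py is pointed at the same SPDK target application the test drives and that the keyring entries key1/ckey1 were registered earlier in auth.sh (not shown in this excerpt):

  # limit negotiation to a single digest and DH group before connecting
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key additionally requests bidirectional authentication
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller came up, then tear it down for the next key ID
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0
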
00:20:46.066 nvme0n1 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.066 11:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.066 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.634 nvme0n1 00:20:46.634 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.634 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.635 11:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.635 11:09:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.635 11:09:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.894 nvme0n1 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.894 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.895 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.154 nvme0n1 00:20:47.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.154 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.673 nvme0n1 00:20:47.673 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.673 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.673 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.673 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.673 11:09:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.673 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.674 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.933 nvme0n1 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:47.933 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjNDM2YmZhNmNhNDVkMDA1NTM2ZDRmMzQ1OTUyZmYMFW7B: 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: ]] 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzUxYzZjODc0MDE5NDc0YzI2NWE2YTY2NGEwM2M2OGVhNWRjZjg5MjUxZGQxNWI5YzFmNWRiNDc3ZTA1MmJkNpQljfI=: 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.192 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.193 11:09:53 
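The nvmet_auth_set_key calls above (host/auth.sh@42-51) provision the DH-HMAC-CHAP secrets on the target side for the host entry before each connection attempt. A minimal sketch of what those echo lines appear to do, assuming the standard kernel nvmet configfs layout; the configfs paths and the $key/$ckey variables are illustrative, not taken from the trace:

    # Assumed kernel nvmet configfs layout for per-host DH-HMAC-CHAP settings.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"       # digest for this host entry
    echo ffdhe8192      > "$host_cfg/dhchap_dhgroup"    # DH group
    echo "$key"         > "$host_cfg/dhchap_key"        # host secret (DHHC-1:...)
    [[ -n $ckey ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"  # controller secret, only for bidirectional auth
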
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.193 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.452 nvme0n1 00:20:48.711 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.711 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.711 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.711 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.711 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.711 11:09:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.711 11:09:54 
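Condensed, the connect_authenticate cycle that the trace repeats for every digest/dhgroup/keyid combination looks like the sketch below (shown here for sha512/ffdhe8192 with key1/ckey1). rpc_cmd is the suite's JSON-RPC helper, and key1/ckey1 are key names registered earlier in the test:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1           # bidirectional authentication
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # connect must have succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0                 # tear down before the next keyid
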
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.711 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 nvme0n1 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.280 11:09:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 nvme0n1 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWExYmI1NDI4MGM3M2I4Mjc2MGQ4NzI3NjNkMzU2YjMzNjFlMDkzMjkwNDdkN2M5IaOGzA==: 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTc1YWFjOWI4YWI0ZDkyOTYyY2NhZWM2MmQ3NjdmMzCbbI5M: 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.849 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.418 nvme0n1 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:50.418 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjU2ZmM5OTFhNmVlNjRmM2Q5Y2VmNzAxOThmNDFkOGU0ZDVhZjYwYWUzYjE3MjRjZjE2ZGVhNGY1NzljNzcyZl5PTWo=: 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:50.419 11:09:55 
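The nvmf/common.sh@769-783 block that recurs between the RPC calls is address selection: it maps the transport to the name of the environment variable holding the initiator-side address and dereferences it (10.0.0.1 for tcp in this run). A condensed sketch; the $TEST_TRANSPORT name is an assumption, and the namespace handling the real helper performs is omitted:

    get_main_ns_ip() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local ip_var=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
        local ip=${!ip_var}                              # indirect expansion: variable name -> value
        [[ -n $ip ]] && echo "$ip"                       # prints 10.0.0.1 in this run
    }
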
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.419 11:09:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.989 nvme0n1 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.989 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.989 request: 00:20:50.989 { 00:20:50.989 "name": "nvme0", 00:20:50.989 "trtype": "tcp", 00:20:50.989 "traddr": "10.0.0.1", 00:20:50.989 "adrfam": "ipv4", 00:20:50.989 "trsvcid": "4420", 00:20:50.989 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:50.989 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:50.989 "prchk_reftag": false, 00:20:50.989 "prchk_guard": false, 00:20:50.989 "hdgst": false, 00:20:50.989 "ddgst": false, 00:20:50.989 "allow_unrecognized_csi": false, 00:20:50.990 "method": "bdev_nvme_attach_controller", 00:20:50.990 "req_id": 1 00:20:50.990 } 00:20:50.990 Got JSON-RPC error response 00:20:50.990 response: 00:20:50.990 { 00:20:50.990 "code": -5, 00:20:50.990 "message": "Input/output error" 00:20:50.990 } 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:50.990 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.249 request: 00:20:51.249 { 00:20:51.249 "name": "nvme0", 00:20:51.249 "trtype": "tcp", 00:20:51.249 "traddr": "10.0.0.1", 00:20:51.249 "adrfam": "ipv4", 00:20:51.249 "trsvcid": "4420", 00:20:51.249 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:51.249 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:51.249 "prchk_reftag": false, 00:20:51.249 "prchk_guard": false, 00:20:51.249 "hdgst": false, 00:20:51.249 "ddgst": false, 00:20:51.249 "dhchap_key": "key2", 00:20:51.249 "allow_unrecognized_csi": false, 00:20:51.249 "method": "bdev_nvme_attach_controller", 00:20:51.249 "req_id": 1 00:20:51.249 } 00:20:51.249 Got JSON-RPC error response 00:20:51.249 response: 00:20:51.249 { 00:20:51.249 "code": -5, 00:20:51.249 "message": "Input/output error" 00:20:51.249 } 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.249 11:09:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.249 request: 00:20:51.249 { 00:20:51.249 "name": "nvme0", 00:20:51.249 "trtype": "tcp", 00:20:51.249 "traddr": "10.0.0.1", 00:20:51.249 "adrfam": "ipv4", 00:20:51.249 "trsvcid": "4420", 
00:20:51.249 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:51.249 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:51.249 "prchk_reftag": false, 00:20:51.249 "prchk_guard": false, 00:20:51.249 "hdgst": false, 00:20:51.249 "ddgst": false, 00:20:51.249 "dhchap_key": "key1", 00:20:51.249 "dhchap_ctrlr_key": "ckey2", 00:20:51.249 "allow_unrecognized_csi": false, 00:20:51.249 "method": "bdev_nvme_attach_controller", 00:20:51.249 "req_id": 1 00:20:51.249 } 00:20:51.249 Got JSON-RPC error response 00:20:51.249 response: 00:20:51.249 { 00:20:51.249 "code": -5, 00:20:51.249 "message": "Input/output error" 00:20:51.249 } 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:51.249 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.250 nvme0n1 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.250 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.509 request: 00:20:51.509 { 00:20:51.509 "name": "nvme0", 00:20:51.509 "dhchap_key": "key1", 00:20:51.509 "dhchap_ctrlr_key": "ckey2", 00:20:51.509 "method": "bdev_nvme_set_keys", 00:20:51.509 "req_id": 1 00:20:51.509 } 00:20:51.509 Got JSON-RPC error response 00:20:51.509 response: 00:20:51.509 
{ 00:20:51.509 "code": -13, 00:20:51.509 "message": "Permission denied" 00:20:51.509 } 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:51.509 11:09:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.447 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVkNDRkOWI4MjNhMTNiNTQzN2FmNmFlNWE1NzgzYmMwNGIxMTcxNWVkMzJjMWY5la7lEQ==: 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: ]] 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTYwMjU5YTdmNTc3OTk3ODNlNzFkNTcxMmQ2YjBmZTJkMmEzN2NiMjFkNWMyZjI59TzGBw==: 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.705 11:09:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 nvme0n1 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmZhODU2ODI4Y2M1YmMwODFjMjMzYWNlYmIzNGEyYzBDZTRQ: 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: ]] 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmJmYjljMWU2NjY2YzE2MmRkYWE1MzFhMWI3M2E2ODdzWrCL: 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 request: 00:20:52.705 { 00:20:52.705 "name": "nvme0", 00:20:52.705 "dhchap_key": "key2", 00:20:52.705 "dhchap_ctrlr_key": "ckey1", 00:20:52.705 "method": "bdev_nvme_set_keys", 00:20:52.705 "req_id": 1 00:20:52.705 } 00:20:52.705 Got JSON-RPC error response 00:20:52.705 response: 00:20:52.705 { 00:20:52.705 "code": -13, 00:20:52.705 "message": "Permission denied" 00:20:52.705 } 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:52.705 11:09:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.081 rmmod nvme_tcp 00:20:54.081 rmmod nvme_fabrics 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 93638 ']' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 93638 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 93638 ']' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 93638 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93638 00:20:54.081 killing process with pid 93638 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93638' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 93638 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 93638 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:54.081 11:09:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:54.081 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:54.339 11:09:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:54.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.165 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:20:55.165 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.165 11:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.NBv /tmp/spdk.key-null.Cvb /tmp/spdk.key-sha256.YCs /tmp/spdk.key-sha384.CBd /tmp/spdk.key-sha512.ZHf /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:55.165 11:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:55.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.684 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:55.684 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:55.684 00:20:55.684 real 0m35.165s 00:20:55.684 user 0m32.467s 00:20:55.684 sys 0m3.691s 00:20:55.684 11:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.684 11:10:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.684 ************************************ 00:20:55.684 END TEST nvmf_auth_host 00:20:55.684 ************************************ 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.684 ************************************ 00:20:55.684 START TEST nvmf_digest 00:20:55.684 ************************************ 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:55.684 * Looking for test storage... 
00:20:55.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:20:55.684 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.942 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:55.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.943 --rc genhtml_branch_coverage=1 00:20:55.943 --rc genhtml_function_coverage=1 00:20:55.943 --rc genhtml_legend=1 00:20:55.943 --rc geninfo_all_blocks=1 00:20:55.943 --rc geninfo_unexecuted_blocks=1 00:20:55.943 00:20:55.943 ' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:55.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.943 --rc genhtml_branch_coverage=1 00:20:55.943 --rc genhtml_function_coverage=1 00:20:55.943 --rc genhtml_legend=1 00:20:55.943 --rc geninfo_all_blocks=1 00:20:55.943 --rc geninfo_unexecuted_blocks=1 00:20:55.943 00:20:55.943 ' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:55.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.943 --rc genhtml_branch_coverage=1 00:20:55.943 --rc genhtml_function_coverage=1 00:20:55.943 --rc genhtml_legend=1 00:20:55.943 --rc geninfo_all_blocks=1 00:20:55.943 --rc geninfo_unexecuted_blocks=1 00:20:55.943 00:20:55.943 ' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:55.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.943 --rc genhtml_branch_coverage=1 00:20:55.943 --rc genhtml_function_coverage=1 00:20:55.943 --rc genhtml_legend=1 00:20:55.943 --rc geninfo_all_blocks=1 00:20:55.943 --rc geninfo_unexecuted_blocks=1 00:20:55.943 00:20:55.943 ' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.943 11:10:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.943 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.943 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:55.944 Cannot find device "nvmf_init_br" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:55.944 Cannot find device "nvmf_init_br2" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:55.944 Cannot find device "nvmf_tgt_br" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:55.944 Cannot find device "nvmf_tgt_br2" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:55.944 Cannot find device "nvmf_init_br" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:55.944 Cannot find device "nvmf_init_br2" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:55.944 Cannot find device "nvmf_tgt_br" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:55.944 Cannot find device "nvmf_tgt_br2" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:55.944 Cannot find device "nvmf_br" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:55.944 Cannot find device "nvmf_init_if" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:55.944 Cannot find device "nvmf_init_if2" 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.944 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.944 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:56.203 11:10:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:56.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:56.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:56.203 00:20:56.203 --- 10.0.0.3 ping statistics --- 00:20:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.203 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:56.203 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:56.203 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:56.203 00:20:56.203 --- 10.0.0.4 ping statistics --- 00:20:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.203 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:56.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:56.203 00:20:56.203 --- 10.0.0.1 ping statistics --- 00:20:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.203 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:56.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:56.203 00:20:56.203 --- 10.0.0.2 ping statistics --- 00:20:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.203 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:56.203 ************************************ 00:20:56.203 START TEST nvmf_digest_clean 00:20:56.203 ************************************ 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=95260 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 95260 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 95260 ']' 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:56.203 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.204 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.204 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.204 11:10:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:56.463 [2024-10-29 11:10:01.756000] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:20:56.463 [2024-10-29 11:10:01.756096] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.463 [2024-10-29 11:10:01.912339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.463 [2024-10-29 11:10:01.936337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.463 [2024-10-29 11:10:01.936412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.463 [2024-10-29 11:10:01.936427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.463 [2024-10-29 11:10:01.936438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.463 [2024-10-29 11:10:01.936447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:56.463 [2024-10-29 11:10:01.936839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:56.722 [2024-10-29 11:10:02.085465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.722 null0 00:20:56.722 [2024-10-29 11:10:02.119718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.722 [2024-10-29 11:10:02.143795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:56.722 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95283 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95283 /var/tmp/bperf.sock 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 95283 ']' 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.723 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:56.723 [2024-10-29 11:10:02.212853] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:20:56.723 [2024-10-29 11:10:02.212968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95283 ] 00:20:56.982 [2024-10-29 11:10:02.366324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.982 [2024-10-29 11:10:02.390229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.982 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:56.982 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:20:56.982 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:56.982 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:56.982 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:57.551 [2024-10-29 11:10:02.753358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.551 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.551 11:10:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.810 nvme0n1 00:20:57.810 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.810 11:10:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.810 Running I/O for 2 seconds... 
00:21:00.126 17653.00 IOPS, 68.96 MiB/s [2024-10-29T11:10:05.623Z] 17716.50 IOPS, 69.21 MiB/s 00:21:00.126 Latency(us) 00:21:00.126 [2024-10-29T11:10:05.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.126 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:00.126 nvme0n1 : 2.01 17755.06 69.36 0.00 0.00 7203.79 6613.18 21328.99 00:21:00.126 [2024-10-29T11:10:05.623Z] =================================================================================================================== 00:21:00.126 [2024-10-29T11:10:05.623Z] Total : 17755.06 69.36 0.00 0.00 7203.79 6613.18 21328.99 00:21:00.126 { 00:21:00.126 "results": [ 00:21:00.126 { 00:21:00.126 "job": "nvme0n1", 00:21:00.127 "core_mask": "0x2", 00:21:00.127 "workload": "randread", 00:21:00.127 "status": "finished", 00:21:00.127 "queue_depth": 128, 00:21:00.127 "io_size": 4096, 00:21:00.127 "runtime": 2.010018, 00:21:00.127 "iops": 17755.064880016, 00:21:00.127 "mibps": 69.3557221875625, 00:21:00.127 "io_failed": 0, 00:21:00.127 "io_timeout": 0, 00:21:00.127 "avg_latency_us": 7203.791065191253, 00:21:00.127 "min_latency_us": 6613.178181818182, 00:21:00.127 "max_latency_us": 21328.98909090909 00:21:00.127 } 00:21:00.127 ], 00:21:00.127 "core_count": 1 00:21:00.127 } 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:00.127 | select(.opcode=="crc32c") 00:21:00.127 | "\(.module_name) \(.executed)"' 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95283 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 95283 ']' 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 95283 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95283 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:00.127 
killing process with pid 95283 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95283' 00:21:00.127 Received shutdown signal, test time was about 2.000000 seconds 00:21:00.127 00:21:00.127 Latency(us) 00:21:00.127 [2024-10-29T11:10:05.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.127 [2024-10-29T11:10:05.624Z] =================================================================================================================== 00:21:00.127 [2024-10-29T11:10:05.624Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 95283 00:21:00.127 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 95283 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95336 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95336 /var/tmp/bperf.sock 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 95336 ']' 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.387 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:00.387 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.387 Zero copy mechanism will not be used. 00:21:00.387 [2024-10-29 11:10:05.730797] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:21:00.387 [2024-10-29 11:10:05.730898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95336 ] 00:21:00.388 [2024-10-29 11:10:05.872275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.647 [2024-10-29 11:10:05.891886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.647 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:00.647 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:21:00.647 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:00.647 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:00.647 11:10:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:00.908 [2024-10-29 11:10:06.205712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:00.908 11:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.908 11:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.166 nvme0n1 00:21:01.166 11:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:01.166 11:10:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:01.426 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:01.426 Zero copy mechanism will not be used. 00:21:01.426 Running I/O for 2 seconds... 
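The bring-up that the xtrace above drives over /var/tmp/bperf.sock condenses to the following sequence (a sketch assembled from the traced commands, not a standalone script; it assumes the NVMe/TCP target configured earlier in this run is already listening on 10.0.0.3:4420):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &          # start idle, wait for RPC configuration
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # attach with data digest enabled
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests   # timed run on nvme0n1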
00:21:03.298 8720.00 IOPS, 1090.00 MiB/s [2024-10-29T11:10:08.795Z] 8752.00 IOPS, 1094.00 MiB/s 00:21:03.298 Latency(us) 00:21:03.298 [2024-10-29T11:10:08.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.298 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:03.298 nvme0n1 : 2.00 8749.17 1093.65 0.00 0.00 1825.99 1645.85 10843.23 00:21:03.298 [2024-10-29T11:10:08.795Z] =================================================================================================================== 00:21:03.298 [2024-10-29T11:10:08.795Z] Total : 8749.17 1093.65 0.00 0.00 1825.99 1645.85 10843.23 00:21:03.298 { 00:21:03.298 "results": [ 00:21:03.298 { 00:21:03.298 "job": "nvme0n1", 00:21:03.298 "core_mask": "0x2", 00:21:03.298 "workload": "randread", 00:21:03.298 "status": "finished", 00:21:03.298 "queue_depth": 16, 00:21:03.298 "io_size": 131072, 00:21:03.298 "runtime": 2.002475, 00:21:03.298 "iops": 8749.17289853806, 00:21:03.298 "mibps": 1093.6466123172574, 00:21:03.298 "io_failed": 0, 00:21:03.298 "io_timeout": 0, 00:21:03.298 "avg_latency_us": 1825.992209215442, 00:21:03.298 "min_latency_us": 1645.8472727272726, 00:21:03.298 "max_latency_us": 10843.229090909092 00:21:03.298 } 00:21:03.298 ], 00:21:03.298 "core_count": 1 00:21:03.298 } 00:21:03.298 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:03.298 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:03.298 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:03.298 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:03.298 | select(.opcode=="crc32c") 00:21:03.298 | "\(.module_name) \(.executed)"' 00:21:03.298 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95336 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 95336 ']' 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 95336 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95336 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:21:03.558 killing process with pid 95336 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95336' 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 95336 00:21:03.558 Received shutdown signal, test time was about 2.000000 seconds 00:21:03.558 00:21:03.558 Latency(us) 00:21:03.558 [2024-10-29T11:10:09.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.558 [2024-10-29T11:10:09.055Z] =================================================================================================================== 00:21:03.558 [2024-10-29T11:10:09.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.558 11:10:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 95336 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95379 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95379 /var/tmp/bperf.sock 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 95379 ']' 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.818 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:03.818 [2024-10-29 11:10:09.150475] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:21:03.818 [2024-10-29 11:10:09.150575] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95379 ] 00:21:03.818 [2024-10-29 11:10:09.288478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.818 [2024-10-29 11:10:09.307462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.077 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:04.077 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:21:04.077 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:04.077 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:04.077 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:04.336 [2024-10-29 11:10:09.590521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:04.336 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.336 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.595 nvme0n1 00:21:04.595 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:04.595 11:10:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:04.853 Running I/O for 2 seconds... 
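The pass/fail check that follows each of these runs (host/digest.sh@93-96 in the traces) reads the crc32c counters back from the accel framework and requires that the expected module executed them; with scan_dsa=false the expected module is software. A condensed sketch of that check against the same socket, assembled from the traced commands:

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  exp_module=software                    # scan_dsa=false in these runs
  (( acc_executed > 0 ))                 # crc32c must actually have been executed
  [[ $acc_module == "$exp_module" ]]     # ...and by the expected module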
00:21:06.722 19051.00 IOPS, 74.42 MiB/s [2024-10-29T11:10:12.219Z] 19114.00 IOPS, 74.66 MiB/s 00:21:06.722 Latency(us) 00:21:06.722 [2024-10-29T11:10:12.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.722 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.722 nvme0n1 : 2.00 19133.92 74.74 0.00 0.00 6684.45 6255.71 14954.12 00:21:06.722 [2024-10-29T11:10:12.219Z] =================================================================================================================== 00:21:06.722 [2024-10-29T11:10:12.219Z] Total : 19133.92 74.74 0.00 0.00 6684.45 6255.71 14954.12 00:21:06.722 { 00:21:06.722 "results": [ 00:21:06.722 { 00:21:06.722 "job": "nvme0n1", 00:21:06.722 "core_mask": "0x2", 00:21:06.722 "workload": "randwrite", 00:21:06.722 "status": "finished", 00:21:06.722 "queue_depth": 128, 00:21:06.722 "io_size": 4096, 00:21:06.722 "runtime": 2.004608, 00:21:06.722 "iops": 19133.915458782965, 00:21:06.722 "mibps": 74.74185726087096, 00:21:06.722 "io_failed": 0, 00:21:06.722 "io_timeout": 0, 00:21:06.722 "avg_latency_us": 6684.453839721651, 00:21:06.722 "min_latency_us": 6255.709090909091, 00:21:06.722 "max_latency_us": 14954.123636363636 00:21:06.722 } 00:21:06.722 ], 00:21:06.722 "core_count": 1 00:21:06.722 } 00:21:06.722 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:06.722 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:06.722 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:06.722 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:06.722 | select(.opcode=="crc32c") 00:21:06.722 | "\(.module_name) \(.executed)"' 00:21:06.722 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95379 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 95379 ']' 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 95379 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95379 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:21:06.981 killing process with pid 95379 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95379' 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 95379 00:21:06.981 Received shutdown signal, test time was about 2.000000 seconds 00:21:06.981 00:21:06.981 Latency(us) 00:21:06.981 [2024-10-29T11:10:12.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.981 [2024-10-29T11:10:12.478Z] =================================================================================================================== 00:21:06.981 [2024-10-29T11:10:12.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.981 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 95379 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95432 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95432 /var/tmp/bperf.sock 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 95432 ']' 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:07.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:07.240 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:07.240 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:07.240 Zero copy mechanism will not be used. 00:21:07.240 [2024-10-29 11:10:12.581389] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:21:07.240 [2024-10-29 11:10:12.581487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95432 ] 00:21:07.240 [2024-10-29 11:10:12.719205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.240 [2024-10-29 11:10:12.737931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.499 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.499 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:21:07.499 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:07.499 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:07.499 11:10:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:07.757 [2024-10-29 11:10:13.139598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:07.758 11:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:07.758 11:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:08.017 nvme0n1 00:21:08.017 11:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:08.017 11:10:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:08.276 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:08.276 Zero copy mechanism will not be used. 00:21:08.276 Running I/O for 2 seconds... 
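The MiB/s column in these result tables is simply IOPS multiplied by the IO size: the 4 KiB randwrite run above reports 19133.92 IOPS, and 19133.92 × 4096 / 2^20 ≈ 74.74 MiB/s, matching the table; likewise 8749.17 IOPS at 128 KiB gives 8749.17 / 8 ≈ 1093.65 MiB/s. For example:

  awk 'BEGIN { printf "%.2f MiB/s\n", 19133.92 * 4096 / (1024 * 1024) }'    # -> 74.74 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 8749.17 * 131072 / (1024 * 1024) }'   # -> 1093.65 MiB/s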
00:21:10.149 7344.00 IOPS, 918.00 MiB/s [2024-10-29T11:10:15.646Z] 7368.50 IOPS, 921.06 MiB/s 00:21:10.149 Latency(us) 00:21:10.149 [2024-10-29T11:10:15.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.149 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:10.149 nvme0n1 : 2.00 7364.91 920.61 0.00 0.00 2167.60 1653.29 7298.33 00:21:10.149 [2024-10-29T11:10:15.646Z] =================================================================================================================== 00:21:10.149 [2024-10-29T11:10:15.646Z] Total : 7364.91 920.61 0.00 0.00 2167.60 1653.29 7298.33 00:21:10.149 { 00:21:10.149 "results": [ 00:21:10.149 { 00:21:10.149 "job": "nvme0n1", 00:21:10.149 "core_mask": "0x2", 00:21:10.149 "workload": "randwrite", 00:21:10.149 "status": "finished", 00:21:10.149 "queue_depth": 16, 00:21:10.149 "io_size": 131072, 00:21:10.149 "runtime": 2.004097, 00:21:10.149 "iops": 7364.912975769137, 00:21:10.149 "mibps": 920.6141219711421, 00:21:10.149 "io_failed": 0, 00:21:10.149 "io_timeout": 0, 00:21:10.149 "avg_latency_us": 2167.6016358709044, 00:21:10.149 "min_latency_us": 1653.2945454545454, 00:21:10.149 "max_latency_us": 7298.327272727272 00:21:10.149 } 00:21:10.149 ], 00:21:10.149 "core_count": 1 00:21:10.149 } 00:21:10.149 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:10.149 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:10.149 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:10.149 | select(.opcode=="crc32c") 00:21:10.149 | "\(.module_name) \(.executed)"' 00:21:10.149 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:10.149 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95432 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 95432 ']' 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 95432 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95432 00:21:10.717 killing process with pid 95432 00:21:10.717 Received shutdown signal, test time was about 2.000000 seconds 00:21:10.717 00:21:10.717 Latency(us) 00:21:10.717 [2024-10-29T11:10:16.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:10.717 [2024-10-29T11:10:16.214Z] =================================================================================================================== 00:21:10.717 [2024-10-29T11:10:16.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95432' 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 95432 00:21:10.717 11:10:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 95432 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95260 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 95260 ']' 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 95260 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95260 00:21:10.717 killing process with pid 95260 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95260' 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 95260 00:21:10.717 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 95260 00:21:10.976 ************************************ 00:21:10.976 END TEST nvmf_digest_clean 00:21:10.976 ************************************ 00:21:10.976 00:21:10.976 real 0m14.556s 00:21:10.976 user 0m28.462s 00:21:10.976 sys 0m4.232s 00:21:10.976 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:10.977 ************************************ 00:21:10.977 START TEST nvmf_digest_error 00:21:10.977 ************************************ 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:21:10.977 11:10:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=95503 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 95503 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95503 ']' 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.977 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:10.977 [2024-10-29 11:10:16.363939] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:21:10.977 [2024-10-29 11:10:16.364032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.236 [2024-10-29 11:10:16.512960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.236 [2024-10-29 11:10:16.532194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.236 [2024-10-29 11:10:16.532261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.236 [2024-10-29 11:10:16.532287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.236 [2024-10-29 11:10:16.532294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.236 [2024-10-29 11:10:16.532300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
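For the error-injection variant the target itself is started with --wait-for-rpc (inside the nvmf_tgt_ns_spdk namespace, as traced above), so that crc32c can be rerouted to the error module before initialization completes; digest.sh@104 below does that via rpc_cmd, the suite's rpc.py wrapper for the target's RPC socket. Roughly, as a sketch of the traced steps:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error    # route crc32c through the error module
  # later in the trace (digest.sh@63/@67): accel_error_inject_error -o crc32c -t disable, then -t corrupt -i 256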
00:21:11.236 [2024-10-29 11:10:16.532613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.236 [2024-10-29 11:10:16.648991] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.236 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.236 [2024-10-29 11:10:16.688077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:11.236 null0 00:21:11.236 [2024-10-29 11:10:16.718901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.495 [2024-10-29 11:10:16.743012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95533 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95533 /var/tmp/bperf.sock 00:21:11.495 11:10:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95533 ']' 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:11.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:11.495 11:10:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:11.495 [2024-10-29 11:10:16.799492] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:21:11.495 [2024-10-29 11:10:16.799590] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95533 ] 00:21:11.495 [2024-10-29 11:10:16.941748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.495 [2024-10-29 11:10:16.960495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.495 [2024-10-29 11:10:16.987455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:11.754 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:11.754 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:21:11.754 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:11.754 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:12.013 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:12.013 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.013 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.013 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.013 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:12.013 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:12.272 nvme0n1 00:21:12.272 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:12.272 11:10:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.272 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:12.272 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.272 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:12.272 11:10:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:12.532 Running I/O for 2 seconds... 00:21:12.532 [2024-10-29 11:10:17.819619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.819676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.819689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.834050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.834096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.834108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.848029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.848075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.848086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.861907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.861953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.861965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.876256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.876302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.876313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.890959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.891005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7927 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.891015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.905514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.905557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.905568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.919342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.919408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.919420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.933303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.933346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.933357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.947121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.947165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.947175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.961152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.961197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.961209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.975076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.975120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.975131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:17.989191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:17.989234] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:17.989245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:18.003018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:18.003062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:18.003072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.532 [2024-10-29 11:10:18.016925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.532 [2024-10-29 11:10:18.016969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.532 [2024-10-29 11:10:18.016979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.031768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.031830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.031841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.046426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.046471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.046481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.060428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.060472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.060483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.074480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.074523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.074534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.088503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 
11:10:18.088552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.088563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.102501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.102545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.102555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.116372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.116426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.792 [2024-10-29 11:10:18.116437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.792 [2024-10-29 11:10:18.130493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.792 [2024-10-29 11:10:18.130536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.130546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.144356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.144409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.144421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.158236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.158280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.158291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.172125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.172169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.172179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.186743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.186789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.186800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.201429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.201473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.201484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.215610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.215655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.215665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.229497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.229540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.229551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.243409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.243453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.243464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.257263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.257307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.257318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.271286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.271330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.271340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.793 [2024-10-29 11:10:18.285279] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:12.793 [2024-10-29 11:10:18.285324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.793 [2024-10-29 11:10:18.285335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.053 [2024-10-29 11:10:18.300452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.053 [2024-10-29 11:10:18.300499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.053 [2024-10-29 11:10:18.300510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.053 [2024-10-29 11:10:18.314432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.053 [2024-10-29 11:10:18.314476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.053 [2024-10-29 11:10:18.314486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.053 [2024-10-29 11:10:18.328461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.053 [2024-10-29 11:10:18.328504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.053 [2024-10-29 11:10:18.328515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.053 [2024-10-29 11:10:18.342419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.053 [2024-10-29 11:10:18.342462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.053 [2024-10-29 11:10:18.342472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.053 [2024-10-29 11:10:18.356241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.053 [2024-10-29 11:10:18.356285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.053 [2024-10-29 11:10:18.356295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.370153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.370197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.370207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:13.054 [2024-10-29 11:10:18.384162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.384205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.384216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.398063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.398105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.398116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.411933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.411976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.411986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.425903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.425946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.425957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.439874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.439917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.439928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.454074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.454117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.454128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.467957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.468000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.468011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.482008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.482051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.482062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.495915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.495957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.495968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.510109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.510152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.510162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.524008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.524052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.524063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.054 [2024-10-29 11:10:18.538111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.054 [2024-10-29 11:10:18.538155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.054 [2024-10-29 11:10:18.538165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.553199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.553246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.553258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.567701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.567747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.567758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.581853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.581897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.581907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.595836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.595879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.595889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.609670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.609713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.609724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.623510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.623553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.623564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.637542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.637585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.637595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.651433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.651477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.651488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.667818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.667849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:13.314 [2024-10-29 11:10:18.667861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.684013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.684056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.684067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.699224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.699267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.699277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.719186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.719228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.719239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.733179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.733221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.733232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.747072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.314 [2024-10-29 11:10:18.747115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.314 [2024-10-29 11:10:18.747125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.314 [2024-10-29 11:10:18.761711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.315 [2024-10-29 11:10:18.761758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.315 [2024-10-29 11:10:18.761798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.315 [2024-10-29 11:10:18.778177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.315 [2024-10-29 11:10:18.778222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.315 [2024-10-29 11:10:18.778234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.315 [2024-10-29 11:10:18.795511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.315 [2024-10-29 11:10:18.795557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.315 [2024-10-29 11:10:18.795569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.315 17585.00 IOPS, 68.69 MiB/s [2024-10-29T11:10:18.812Z] [2024-10-29 11:10:18.810940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.315 [2024-10-29 11:10:18.810987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.315 [2024-10-29 11:10:18.810999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.826532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.826578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.826589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.841817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.841862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.841873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.856373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.856475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.856487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.871368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.871420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.871431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.886805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.886851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.886862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.902326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.902370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.902381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.917197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.917241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.917252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.574 [2024-10-29 11:10:18.932183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.574 [2024-10-29 11:10:18.932228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.574 [2024-10-29 11:10:18.932239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:18.947087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:18.947131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:18.947142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:18.962062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:18.962105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:18.962115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:18.976137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:18.976181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:18.976192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:18.990332] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:18.990376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:18.990394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:19.004179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:19.004223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:19.004234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:19.018135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:19.018179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:19.018192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:19.031969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:19.032013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:19.032023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:19.045954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:19.045998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:19.046009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.575 [2024-10-29 11:10:19.060056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.575 [2024-10-29 11:10:19.060099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.575 [2024-10-29 11:10:19.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.074584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.074630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.074641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:13.835 [2024-10-29 11:10:19.089121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.089166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.089177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.103286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.103333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.103344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.118022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.118066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.118078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.132107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.132150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.132161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.146091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.146135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.146146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.159896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.159939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.159949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.174066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.174110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.174121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.188746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.188795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.188807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.203379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.203425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.203435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.217585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.217630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.217640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.231382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.231425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.231435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.245294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.245337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.245347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.259036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.259079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.259090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.273113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.273156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.273166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.287022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.287065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.287076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.301165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.301210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.301221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.314978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.315024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.315035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.835 [2024-10-29 11:10:19.329151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:13.835 [2024-10-29 11:10:19.329226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.835 [2024-10-29 11:10:19.329238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.343967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.344012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.344023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.358075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.358118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.358129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.372620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.372669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:14.095 [2024-10-29 11:10:19.372682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.387520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.387564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.387575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.402248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.402292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.402303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.416183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.416229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.416241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.430121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.430166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.430177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.444040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.444085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.444095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.457931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.457957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.457983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.471821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.471864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:15763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.471875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.485805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.485848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.485859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.499745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.499787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.499798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.513675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.513717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.513728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.527755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.527798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.527808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.541719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.541761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.541772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.095 [2024-10-29 11:10:19.555608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.095 [2024-10-29 11:10:19.555650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.095 [2024-10-29 11:10:19.555660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.096 [2024-10-29 11:10:19.569477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.096 [2024-10-29 11:10:19.569519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.096 [2024-10-29 11:10:19.569530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.096 [2024-10-29 11:10:19.583460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.096 [2024-10-29 11:10:19.583503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.096 [2024-10-29 11:10:19.583514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.598571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.598616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.598627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.613086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.613129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.613139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.627005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.627050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.627063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.647249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.647293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.647303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.661317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.661360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.661370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.676033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 
[2024-10-29 11:10:19.676079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.676090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.693291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.693334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.693345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.709096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.709139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.709150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.723256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.723300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.723311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.737684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.737728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.737738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.751717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.751760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.751771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.765769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.765813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.765823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.779883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.779927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.779938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 [2024-10-29 11:10:19.793973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ba7c60) 00:21:14.358 [2024-10-29 11:10:19.794017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.358 [2024-10-29 11:10:19.794028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:14.358 17647.50 IOPS, 68.94 MiB/s 00:21:14.358 Latency(us) 00:21:14.358 [2024-10-29T11:10:19.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.358 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:14.358 nvme0n1 : 2.01 17626.92 68.86 0.00 0.00 7256.72 6672.76 26810.18 00:21:14.358 [2024-10-29T11:10:19.855Z] =================================================================================================================== 00:21:14.358 [2024-10-29T11:10:19.855Z] Total : 17626.92 68.86 0.00 0.00 7256.72 6672.76 26810.18 00:21:14.358 { 00:21:14.358 "results": [ 00:21:14.358 { 00:21:14.358 "job": "nvme0n1", 00:21:14.358 "core_mask": "0x2", 00:21:14.358 "workload": "randread", 00:21:14.358 "status": "finished", 00:21:14.358 "queue_depth": 128, 00:21:14.358 "io_size": 4096, 00:21:14.358 "runtime": 2.009597, 00:21:14.358 "iops": 17626.917237635207, 00:21:14.358 "mibps": 68.85514545951253, 00:21:14.358 "io_failed": 0, 00:21:14.358 "io_timeout": 0, 00:21:14.358 "avg_latency_us": 7256.721946141824, 00:21:14.358 "min_latency_us": 6672.756363636364, 00:21:14.358 "max_latency_us": 26810.18181818182 00:21:14.358 } 00:21:14.358 ], 00:21:14.358 "core_count": 1 00:21:14.358 } 00:21:14.358 11:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:14.358 11:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:14.358 11:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:14.358 | .driver_specific 00:21:14.358 | .nvme_error 00:21:14.358 | .status_code 00:21:14.358 | .command_transient_transport_error' 00:21:14.358 11:10:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95533 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95533 ']' 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95533 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95533 00:21:14.927 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:14.928 killing process with pid 95533 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95533' 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95533 00:21:14.928 Received shutdown signal, test time was about 2.000000 seconds 00:21:14.928 00:21:14.928 Latency(us) 00:21:14.928 [2024-10-29T11:10:20.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.928 [2024-10-29T11:10:20.425Z] =================================================================================================================== 00:21:14.928 [2024-10-29T11:10:20.425Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95533 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95580 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95580 /var/tmp/bperf.sock 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95580 ']' 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:14.928 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:14.928 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:14.928 Zero copy mechanism will not be used. 
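The (( 138 > 0 )) gate in the trace above passes because the bperf app recorded 138 transient transport errors on nvme0n1 during the 2-second run. As a rough sketch of that readback step (assuming the same /var/tmp/bperf.sock RPC socket and that --nvme-error-stat was enabled when the controller was created, as in this run), the count comes from the per-bdev NVMe error counters:

    # read the per-bdev NVMe error counters and extract the transient transport error count
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'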
00:21:14.928 [2024-10-29 11:10:20.322220] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:21:14.928 [2024-10-29 11:10:20.322309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95580 ] 00:21:15.186 [2024-10-29 11:10:20.455658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.186 [2024-10-29 11:10:20.473836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.187 [2024-10-29 11:10:20.500789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:15.187 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:15.187 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:21:15.187 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:15.187 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:15.446 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:15.446 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.446 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:15.446 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.446 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.446 11:10:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.720 nvme0n1 00:21:15.720 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:15.720 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.720 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:15.720 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.720 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:15.720 11:10:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.003 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:16.003 Zero copy mechanism will not be used. 00:21:16.003 Running I/O for 2 seconds... 
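The flood of "data digest error" messages that follows is produced by the configuration traced above: NVMe error accounting is enabled for the new bdevperf instance, crc32c error injection in the accel framework is first disabled while the controller attaches, the TCP controller is attached with data digest enabled (--ddgst), and injection is then switched to "corrupt" for 32 operations so data digests stop matching and reads complete with transient transport errors. A condensed sketch of that sequence; the bperf_rpc and rpc_cmd variables are illustrative shorthands for the two RPC endpoints used in the trace, and the default-socket path behind rpc_cmd is not shown in this log:

    # bdevperf listens on the bperf socket; rpc_cmd in the trace goes to the
    # framework's default RPC socket (path not shown here, so it is omitted).
    bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    rpc_cmd="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

    # Enable per-status-code NVMe error accounting so the transient transport
    # errors can be read back later with bdev_get_iostat, using the same bdev
    # retry count as the trace.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep crc32c corruption off while the controller attaches cleanly.
    $rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest (--ddgst) enabled.
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 32 crc32c operations so data digests no longer match.
    $rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the timed workload; each digest failure completes as a
    # COMMAND TRANSIENT TRANSPORT ERROR, as seen in the output below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests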
00:21:16.003 [2024-10-29 11:10:21.281697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.281776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.281791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.285781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.285817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.285846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.289841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.289877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.289906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.293945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.293982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.294010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.297904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.297940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.297969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.301898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.301935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.301963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.305868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.305903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.305932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.310013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.310063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.310081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.314047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.314082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.003 [2024-10-29 11:10:21.314110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.003 [2024-10-29 11:10:21.318027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.003 [2024-10-29 11:10:21.318063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.318091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.321985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.322020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.322049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.326158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.326194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.326207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.330132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.330168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.330196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.334171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.334207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.334235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.338193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.338228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.338256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.342471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.342507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.342520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.346358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.346422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.346450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.350311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.350346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.350375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.354231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.354266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.354295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.358461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.358495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.358523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.362330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.362365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:16.004 [2024-10-29 11:10:21.362424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.366303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.366338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.366366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.370371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.370437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.370453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.374584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.374618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.374646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.378567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.378602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.378631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.382712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.382763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.382792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.386680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.386714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.386742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.390563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.390596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.390624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.394459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.394491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.394519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.398319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.398353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.398381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.402180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.402214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.402242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.406133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.406167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.406195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.409992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.410025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.410053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.413867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.413900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.413927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.417655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.417688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.417716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.421540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.421574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.421607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.425872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.425930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.425949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.004 [2024-10-29 11:10:21.430170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.004 [2024-10-29 11:10:21.430210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.004 [2024-10-29 11:10:21.430243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.434518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.434554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.434582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.438512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.438547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.438576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.442539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.442572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.442600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.446314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:16.005 [2024-10-29 11:10:21.446348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.446376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.450126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.450159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.450187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.453997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.454031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.454061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.457850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.457883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.457911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.461715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.461750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.461762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.465564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.465598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.465610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.469450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.469491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.469503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.473200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.473233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.473262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.477143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.477176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.477203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.481069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.481105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.481133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.484951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.484994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.485023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.488829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.488880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.488908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.492749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.492785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.492798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.496557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.496592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.496604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.005 [2024-10-29 11:10:21.500818] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.005 [2024-10-29 11:10:21.500889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.005 [2024-10-29 11:10:21.500918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.504999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.505033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.505061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.509186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.509221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.509249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.513139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.513173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.513201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.516993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.517026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.517054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.520794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.520832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.520861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.524652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.524688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.524701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:16.266 [2024-10-29 11:10:21.528451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.528484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.528511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.532228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.532425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.532442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.536222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.536412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.536430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.540277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.540466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.540483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.544350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.544552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.544569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.548346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.548545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.548563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.552373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.552580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.552597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.556468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.556501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.556555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.560343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.560541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.560558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.266 [2024-10-29 11:10:21.564200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.266 [2024-10-29 11:10:21.564402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.266 [2024-10-29 11:10:21.564420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.568310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.568500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.568517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.572370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.572603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.572620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.576447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.576480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.576508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.580363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.580589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.580606] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.584590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.584628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.584641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.588288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.588477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.588493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.592480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.592515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.592550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.596291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.596464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.596482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.600347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.600549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.600566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.604321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.604513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.604572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.608491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.608548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 
11:10:21.608576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.612354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.612555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.612573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.616390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.616420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.620254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.620433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.620449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.624322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.624499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.624515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.628232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.628403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.628419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.632427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.632462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.632475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.636270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.636466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.636482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.640315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.640504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.640544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.644467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.644503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.644515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.648263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.648439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.648456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.652395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.652426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.652438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.656166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.656330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.656347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.660190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.660355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.660387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.664227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.664401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.664418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.668280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.668456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.668472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.672282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.672455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.672471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.267 [2024-10-29 11:10:21.676277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.267 [2024-10-29 11:10:21.676446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.267 [2024-10-29 11:10:21.676462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.680361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.680566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.680583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.684579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.684616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.684629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.688342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.688531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.688564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.692438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.692469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.692490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.696404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.696439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.696451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.700147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.700323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.700340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.704136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.704311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.704328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.708086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.708259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.712042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.712215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.712231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.716464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.716501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.716538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.720701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:16.268 [2024-10-29 11:10:21.720739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.720752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.724974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.725009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.725037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.729616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.729660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.729673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.734250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.734285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.734313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.738640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.738677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.738707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.743006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.743040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.743068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.747277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.747309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.747337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.751530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.751566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.751580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.755554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.755589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.755617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.268 [2024-10-29 11:10:21.759848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.268 [2024-10-29 11:10:21.759898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.268 [2024-10-29 11:10:21.759927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.529 [2024-10-29 11:10:21.764279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.529 [2024-10-29 11:10:21.764315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.529 [2024-10-29 11:10:21.764343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.529 [2024-10-29 11:10:21.768195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.529 [2024-10-29 11:10:21.768229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.529 [2024-10-29 11:10:21.768258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.529 [2024-10-29 11:10:21.772344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.529 [2024-10-29 11:10:21.772404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.529 [2024-10-29 11:10:21.772433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.529 [2024-10-29 11:10:21.776358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.529 [2024-10-29 11:10:21.776433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.529 [2024-10-29 11:10:21.776446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.529 [2024-10-29 11:10:21.780331] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.529 [2024-10-29 11:10:21.780365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.529 [2024-10-29 11:10:21.780421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.529 [2024-10-29 11:10:21.784225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.529 [2024-10-29 11:10:21.784259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.529 [2024-10-29 11:10:21.784287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.788153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.788188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.788216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.792095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.792128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.792156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.796069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.796105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.796117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.799979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.800014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.800042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.803887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.803922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.803950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:16.530 [2024-10-29 11:10:21.807711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.807744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.807771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.811543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.811576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.811603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.815429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.815463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.815475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.819270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.819303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.819331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.823219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.823253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.823281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.827098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.827131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.827160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.830984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.831016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.831044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.834981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.835014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.835043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.838897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.838929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.838957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.842793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.842827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.842855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.846653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.846687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.846715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.850587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.850621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.850649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.854449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.854482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.854510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.858249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.858430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.858446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.862338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.862528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.862545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.866397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.866431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.866459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.870187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.870363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.870397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.874327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.874521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.874538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.878344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.878531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.878547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.882521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.882571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.882600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.886906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.886992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:16.530 [2024-10-29 11:10:21.887006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.892143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.892176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.892189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.897305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.897342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.530 [2024-10-29 11:10:21.897356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.530 [2024-10-29 11:10:21.902194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.530 [2024-10-29 11:10:21.902229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.902257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.906181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.906216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.906243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.910104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.910154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.910181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.913918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.913968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.913995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.917788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.917837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.917864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.921596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.921645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.921672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.925436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.925494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.925522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.929229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.929277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.929305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.933215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.933265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.933291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.937112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.937160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.937187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.941128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.941178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.941205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.945050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.945098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.945125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.949009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.949042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.949069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.952942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.952991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.953018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.956899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.956933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.956960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.960823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.960889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.960902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.964634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.964669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.964698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.968428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.968461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.968488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.972287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:16.531 [2024-10-29 11:10:21.972322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.972348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.976293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.976328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.976355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.980333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.980368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.980408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.984405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.984438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.984466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.988343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.988388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.988417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.992437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.992471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.992499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:21.996390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:21.996424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:21.996452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:22.000172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:22.000220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:22.000247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:22.004161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:22.004196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:22.004224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:22.008173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:22.008207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.531 [2024-10-29 11:10:22.008234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.531 [2024-10-29 11:10:22.012202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.531 [2024-10-29 11:10:22.012237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.532 [2024-10-29 11:10:22.012264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.532 [2024-10-29 11:10:22.016211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.532 [2024-10-29 11:10:22.016246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.532 [2024-10-29 11:10:22.016273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.532 [2024-10-29 11:10:22.020190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.532 [2024-10-29 11:10:22.020225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.532 [2024-10-29 11:10:22.020252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.532 [2024-10-29 11:10:22.024466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.532 [2024-10-29 11:10:22.024505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.532 [2024-10-29 11:10:22.024559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.792 [2024-10-29 11:10:22.028843] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.792 [2024-10-29 11:10:22.028925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.792 [2024-10-29 11:10:22.028966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.792 [2024-10-29 11:10:22.033190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.792 [2024-10-29 11:10:22.033255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.792 [2024-10-29 11:10:22.033282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.792 [2024-10-29 11:10:22.037385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.792 [2024-10-29 11:10:22.037442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.792 [2024-10-29 11:10:22.037469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.792 [2024-10-29 11:10:22.041222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.792 [2024-10-29 11:10:22.041271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.792 [2024-10-29 11:10:22.041299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.792 [2024-10-29 11:10:22.045183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.792 [2024-10-29 11:10:22.045232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.792 [2024-10-29 11:10:22.045259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.792 [2024-10-29 11:10:22.049168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.792 [2024-10-29 11:10:22.049217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.792 [2024-10-29 11:10:22.049244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.053079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.053127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.053154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:16.793 [2024-10-29 11:10:22.056990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.057038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.057066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.060963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.061011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.061037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.064980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.065028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.065056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.068950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.068998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.069025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.072888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.072951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.072978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.076826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.076877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.076905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.080750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.080786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.080814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.084641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.084676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.084704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.088443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.088490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.088517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.092427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.092461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.092488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.096431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.096464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.096492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.100469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.100503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.100555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.104382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.104427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.104454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.108367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.108424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.108436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.112456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.112490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.112518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.116406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.116440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.116467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.120342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.120417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.120432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.124303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.124354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.124381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.128133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.128181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.128208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.132049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.132098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.132126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.136109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.136143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:16.793 [2024-10-29 11:10:22.136171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.140092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.140126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.140154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.144261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.144296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.144324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.148249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.148283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.148311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.152352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.152399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.152427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.156334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.156369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.156409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.160305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.793 [2024-10-29 11:10:22.160340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.793 [2024-10-29 11:10:22.160367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.793 [2024-10-29 11:10:22.164357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.164401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.164428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.168327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.168362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.168402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.172330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.172364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.172402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.176342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.176401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.176414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.180299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.180334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.180361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.184613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.184652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.184665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.188552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.188603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.188630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.192691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.192745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.192758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.196620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.196657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.196685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.200662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.200700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.200714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.204710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.204763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.208553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.208618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.208646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.212406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.212450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.212461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.216293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.216327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.216354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.220278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:16.794 [2024-10-29 11:10:22.220312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.220339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.224317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.224351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.224378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.228333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.228396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.228410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.232573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.232613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.232628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.236610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.236648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.236676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.240705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.240741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.240754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.244499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.244570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.244612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.248295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.248343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.248370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.252249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.252298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.252325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.256234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.256283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.260109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.260157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.260183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.263971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.264020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.264047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.267852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.267900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.267927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.794 [2024-10-29 11:10:22.271844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.794 [2024-10-29 11:10:22.271894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.794 [2024-10-29 11:10:22.271921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:16.794 7704.00 IOPS, 963.00 MiB/s 
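[editor's note] The repeated "data digest error on tqpair" messages above come from nvme_tcp_accel_seq_recv_compute_crc32_done rejecting the CRC-32C data digest (DDGST) of received NVMe/TCP data PDUs; each affected READ is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as the completion prints show. As a point of reference only -- this is not the SPDK code path, which runs the digest through its accel framework -- the standalone C sketch below computes the CRC-32C that the data digest uses and flags a mismatch the way a receive path might; the sample payload, the forced bit flip, and the error-message wording are illustrative assumptions.

/* crc32c_ddgst_sketch.c - minimal CRC-32C (Castagnoli) data-digest check.
 * Build: cc -o crc32c_ddgst_sketch crc32c_ddgst_sketch.c
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-32C, reflected polynomial 0x82F63B78, init/final-XOR 0xFFFFFFFF.
 * Check value: crc32c("123456789") == 0xE3069283. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++) {
			if (crc & 1u) {
				crc = (crc >> 1) ^ 0x82F63B78u;
			} else {
				crc >>= 1;
			}
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: recompute the digest over the PDU payload
 * and compare it with the digest carried in the PDU trailer. */
static int check_data_digest(const uint8_t *payload, size_t len, uint32_t pdu_ddgst)
{
	uint32_t computed = crc32c(payload, len);

	if (computed != pdu_ddgst) {
		fprintf(stderr, "data digest error: computed 0x%08X, PDU carried 0x%08X\n",
			computed, pdu_ddgst);
		return -1;
	}
	return 0;
}

int main(void)
{
	const uint8_t payload[] = "123456789";
	uint32_t good_ddgst = crc32c(payload, 9);

	printf("CRC-32C(\"123456789\") = 0x%08X\n", good_ddgst);

	/* A matching digest passes; a single flipped bit is reported as a digest error. */
	check_data_digest(payload, 9, good_ddgst);
	check_data_digest(payload, 9, good_ddgst ^ 0x1u);
	return 0;
}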
[2024-10-29T11:10:22.291Z] [2024-10-29 11:10:22.277183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.795 [2024-10-29 11:10:22.277248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.795 [2024-10-29 11:10:22.277276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:16.795 [2024-10-29 11:10:22.281173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.795 [2024-10-29 11:10:22.281222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.795 [2024-10-29 11:10:22.281249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:16.795 [2024-10-29 11:10:22.285216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.795 [2024-10-29 11:10:22.285265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.795 [2024-10-29 11:10:22.285292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:16.795 [2024-10-29 11:10:22.289539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:16.795 [2024-10-29 11:10:22.289587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.795 [2024-10-29 11:10:22.289615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.293722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.293773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.293800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.297812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.297861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.297888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.301741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.301789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.301816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.305636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.305684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.305711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.309516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.309564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.309591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.313418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.313477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.313506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.317525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.317573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.317600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.321366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.321424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.321452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.325248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.325297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.325324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.329121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.329169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.056 [2024-10-29 11:10:22.329196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.056 [2024-10-29 11:10:22.333140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.056 [2024-10-29 11:10:22.333188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.333215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.337271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.337318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.337346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.341221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.341269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.341296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.345228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.345275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.345302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.349187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.349236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.349263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.353119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.353167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.353195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.357024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.357072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.057 [2024-10-29 11:10:22.357099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.360884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.360948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.360975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.364800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.364852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.364883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.368667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.368702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.368730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.372579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.372615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.372627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.376724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.376761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.376774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.381029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.381078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.381105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.385268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.385317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.385344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.389574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.389622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.389651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.394113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.394166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.394194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.398639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.398706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.398719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.403065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.403100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.403129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.407341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.407431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.407447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.411625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.411660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.411688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.415561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.415610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.415638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.419523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.419573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.419600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.423716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.423780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.423808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.427719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.427784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.427812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.431743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.431793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.431821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.435750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.435799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.435826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.439744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.439793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.439821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.443862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:17.057 [2024-10-29 11:10:22.443898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.443925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.057 [2024-10-29 11:10:22.447848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.057 [2024-10-29 11:10:22.447883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.057 [2024-10-29 11:10:22.447910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.451887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.451922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.451950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.455943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.455978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.456006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.460204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.460238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.460266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.464178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.464227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.464255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.468078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.468127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.468154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.472010] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.472060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.472087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.475977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.476026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.476054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.480048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.480083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.480111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.484119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.484154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.484181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.488178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.488211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.488238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.492206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.492241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.492268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.496423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.496477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.496505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:17.058 [2024-10-29 11:10:22.500275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.500321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.500348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.504303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.504339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.504367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.508320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.508357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.508396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.512299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.512334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.512361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.516573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.516611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.516624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.520433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.520467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.524437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.524483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.524497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.528457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.528493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.528505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.532588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.532624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.532652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.536490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.536545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.536574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.540399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.540433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.540460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.544412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.544448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.544461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.548499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.548591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.548604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.058 [2024-10-29 11:10:22.552959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.058 [2024-10-29 11:10:22.553009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.058 [2024-10-29 11:10:22.553036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.557344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.557405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.557435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.561708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.561743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.561771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.565727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.565776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.565803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.569802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.569851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.569878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.573917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.573967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.573994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.578182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.578232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.578259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.582399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.582473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.320 [2024-10-29 11:10:22.582501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.586400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.586464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.586491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.590425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.590472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.590500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.594279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.594327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.594355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.598101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.598149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.598175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.601956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.602004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.602030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.605782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.605830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.605857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.609698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.609746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.609773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.613653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.613701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.613727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.617630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.617678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.617705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.621515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.621562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.621588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.625327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.625399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.625413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.629267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.629317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.629344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.633205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.633253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.633280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.637138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.637186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.637229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.641137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.641185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.641212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.645006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.645055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.645082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.648871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.648950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.648977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.652836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.652901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.652928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.656771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.656836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.320 [2024-10-29 11:10:22.656862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.320 [2024-10-29 11:10:22.660675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.320 [2024-10-29 11:10:22.660710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.660738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.664547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:17.321 [2024-10-29 11:10:22.664599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.664612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.668317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.668349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.668377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.672181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.672214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.672241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.676243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.676277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.676290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.680138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.680172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.680200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.684151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.684202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.684215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.688124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.688160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.688172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.692050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.692087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.692098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.695982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.696018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.696029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.699733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.699782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.699809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.703674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.703708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.703735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.707436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.707483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.707510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.711177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.711226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.711253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.715116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.715164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.715190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.718954] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.719002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.719029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.722824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.722872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.722899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.726669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.726717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.726744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.730488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.730535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.730562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.734342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.734415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.734429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.738572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.738607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.738636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.742907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.742957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.742969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:17.321 [2024-10-29 11:10:22.747165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.747216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.747244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.751524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.751575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.751603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.756070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.756106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.756134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.760593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.760633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.760647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.764989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.765036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.765063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.769268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.769317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.769343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.321 [2024-10-29 11:10:22.773482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.321 [2024-10-29 11:10:22.773544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.321 [2024-10-29 11:10:22.773572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.777608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.777658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.777686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.781598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.781631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.785499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.785548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.785576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.789510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.789544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.789571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.793575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.793624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.793651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.797490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.797537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.797564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.801451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.801511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.801539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.805371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.805431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.805458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.809339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.809400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.809414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.322 [2024-10-29 11:10:22.813528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.322 [2024-10-29 11:10:22.813594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.322 [2024-10-29 11:10:22.813607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.817717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.817767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.817809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.821726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.821774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.821801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.825940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.825990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.826018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.829828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.829877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.582 [2024-10-29 11:10:22.829904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.833831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.833880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.833907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.837979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.838027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.838054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.841938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.841987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.842014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.845891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.845940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.845967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.849768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.582 [2024-10-29 11:10:22.849815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.582 [2024-10-29 11:10:22.849842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.582 [2024-10-29 11:10:22.853874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.853913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.853941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.857854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.857905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.857917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.861690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.861739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.861766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.865586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.865634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.865661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.869512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.869546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.869573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.873545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.873593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.873621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.877364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.877424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.877452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.881350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.881411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.881441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.885340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.885414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.885427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.889308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.889342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.889369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.893308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.893357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.893396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.897331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.897405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.897419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.901292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.901341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.901368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.905204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.905253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.905280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.909278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.909327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.909354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.913193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.913242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.913269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.917191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.917239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.917266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.921156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.921205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.921232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.925100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.925148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.925175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.929041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.929091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.929118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.933050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.933097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.933124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.937038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.937085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.937112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.941016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.941065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.941092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.945072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.945120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.945147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.949052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.949101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.949129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.952936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.952985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.953012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.956833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.956899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.956926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.960677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.583 [2024-10-29 11:10:22.960727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.583 [2024-10-29 11:10:22.960739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.583 [2024-10-29 11:10:22.964572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.964609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.964622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.968297] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.968331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.968357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.972220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.972254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.972281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.976238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.976274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.976286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.980102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.980136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.980163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.984167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.984203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.984215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.988026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.988060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.988087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.991950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.992001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.992013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:17.584 [2024-10-29 11:10:22.995984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.996018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.996046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:22.999932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:22.999967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:22.999979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.003807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.003842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.003870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.007700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.007748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.007775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.011641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.011676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.011704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.015419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.015466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.015493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.019213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.019262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.019290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.023075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.023124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.023151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.026930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.026979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.027006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.030857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.030905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.030933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.034769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.034818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.034845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.038879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.038915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.038942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.042821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.042871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.042898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.046695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.046744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.046771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.050536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.050586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.050613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.054334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.054407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.054420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.058156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.058205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.058232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.062043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.062092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.062119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.066039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.066087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.066115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.069983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.070032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.584 [2024-10-29 11:10:23.070059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.584 [2024-10-29 11:10:23.073940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.584 [2024-10-29 11:10:23.073989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.584 [2024-10-29 11:10:23.074017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.585 [2024-10-29 11:10:23.078154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.585 [2024-10-29 11:10:23.078206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.585 [2024-10-29 11:10:23.078234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.082448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.082514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.082526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.086612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.086663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.086691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.090610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.090658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.090685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.094490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.094538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.094566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.098344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.098401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.098429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.102271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.102306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.102333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.106251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.106300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.106328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.110153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.110202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.110229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.114114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.114163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.114191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.117979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.118027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.118054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.121857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.121905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.121933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.125717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.125765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.125792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.129570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.129617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.129645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.133485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.133532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.133558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.137308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.137356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.137382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.141207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.141242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.141269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.145283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.145331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.145358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.149217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.149266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.149294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.153139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.153186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.153213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.157005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 
00:21:17.845 [2024-10-29 11:10:23.157052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.157079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.160816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.160880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.160907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.845 [2024-10-29 11:10:23.164764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.845 [2024-10-29 11:10:23.164801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.845 [2024-10-29 11:10:23.164828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.168757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.168792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.168821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.172745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.172782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.172794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.176622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.176659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.176687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.180596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.180632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.180645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.184638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.184675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.184688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.188496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.188552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.188597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.192588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.192656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.192670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.196718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.196756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.196769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.200644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.200684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.200697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.204599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.204636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.204664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.208672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.208709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.208722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.212471] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.212503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.212554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.216492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.216550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.216578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.220414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.220447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.220474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.224366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.224411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.224423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.228335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.228396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.228410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.232375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.232421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.232449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.236360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.236403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.236430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:17.846 [2024-10-29 11:10:23.240314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.240348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.240375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.244250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.244284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.244311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.248300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.248336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.248347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.252162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.252196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.252224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.256204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.256242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.256254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.260213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.260250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.260262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.264077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.264112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.264139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.268028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.268078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.268089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:17.846 [2024-10-29 11:10:23.271866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.271914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.271942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:17.846 7727.00 IOPS, 965.88 MiB/s [2024-10-29T11:10:23.343Z] [2024-10-29 11:10:23.277267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc89700) 00:21:17.846 [2024-10-29 11:10:23.277302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.846 [2024-10-29 11:10:23.277329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:17.846 00:21:17.846 Latency(us) 00:21:17.847 [2024-10-29T11:10:23.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.847 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:17.847 nvme0n1 : 2.00 7724.26 965.53 0.00 0.00 2068.26 1675.64 5362.04 00:21:17.847 [2024-10-29T11:10:23.344Z] =================================================================================================================== 00:21:17.847 [2024-10-29T11:10:23.344Z] Total : 7724.26 965.53 0.00 0.00 2068.26 1675.64 5362.04 00:21:17.847 { 00:21:17.847 "results": [ 00:21:17.847 { 00:21:17.847 "job": "nvme0n1", 00:21:17.847 "core_mask": "0x2", 00:21:17.847 "workload": "randread", 00:21:17.847 "status": "finished", 00:21:17.847 "queue_depth": 16, 00:21:17.847 "io_size": 131072, 00:21:17.847 "runtime": 2.002781, 00:21:17.847 "iops": 7724.259417280272, 00:21:17.847 "mibps": 965.532427160034, 00:21:17.847 "io_failed": 0, 00:21:17.847 "io_timeout": 0, 00:21:17.847 "avg_latency_us": 2068.259759064465, 00:21:17.847 "min_latency_us": 1675.6363636363637, 00:21:17.847 "max_latency_us": 5362.036363636364 00:21:17.847 } 00:21:17.847 ], 00:21:17.847 "core_count": 1 00:21:17.847 } 00:21:17.847 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:17.847 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:17.847 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:17.847 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:17.847 | .driver_specific 00:21:17.847 | .nvme_error 00:21:17.847 | .status_code 00:21:17.847 | 
.command_transient_transport_error' 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 499 > 0 )) 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95580 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95580 ']' 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95580 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.106 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95580 00:21:18.365 killing process with pid 95580 00:21:18.365 Received shutdown signal, test time was about 2.000000 seconds 00:21:18.365 00:21:18.365 Latency(us) 00:21:18.365 [2024-10-29T11:10:23.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.365 [2024-10-29T11:10:23.862Z] =================================================================================================================== 00:21:18.365 [2024-10-29T11:10:23.862Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95580' 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95580 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95580 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95629 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95629 /var/tmp/bperf.sock 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95629 ']' 00:21:18.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:18.365 11:10:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:18.365 [2024-10-29 11:10:23.779814] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:21:18.365 [2024-10-29 11:10:23.779908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95629 ] 00:21:18.624 [2024-10-29 11:10:23.928151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.624 [2024-10-29 11:10:23.946430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.624 [2024-10-29 11:10:23.973274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:18.624 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:18.624 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:21:18.624 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:18.624 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:18.883 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:18.883 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.883 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:18.883 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.883 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:18.883 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.143 nvme0n1 00:21:19.143 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:19.143 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.143 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
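Before the randwrite pass starts, the trace above (and the first lines that follow) reconfigures the freshly started bdevperf and the target: NVMe error statistics and unlimited bdev-level retries are enabled, any stale CRC32C error injection is cleared, the controller is attached with data digest enabled, and the injection is re-armed before perform_tests runs. A minimal sketch of that sequence, using only commands and arguments visible in this log; rpc_cmd calls go to the target's default RPC socket while bperf_rpc calls go to /var/tmp/bperf.sock, and the paths below are as used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # Enable NVMe error statistics and unlimited bdev-level retries (bperf_rpc).
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any active CRC32C error injection on the target while attaching (rpc_cmd).
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst) as bdev nvme0 (bperf_rpc).
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-arm CRC32C error injection (-t corrupt -i 256) so data digest checks fail during the run (rpc_cmd).
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the 2-second randwrite workload inside bdevperf.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests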
00:21:19.143 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.143 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:19.143 11:10:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.402 Running I/O for 2 seconds... 00:21:19.402 [2024-10-29 11:10:24.715771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fef90 00:21:19.402 [2024-10-29 11:10:24.718112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.402 [2024-10-29 11:10:24.718162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.402 [2024-10-29 11:10:24.729663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166feb58 00:21:19.402 [2024-10-29 11:10:24.731837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.402 [2024-10-29 11:10:24.731881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:19.402 [2024-10-29 11:10:24.743201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fe2e8 00:21:19.402 [2024-10-29 11:10:24.745518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.402 [2024-10-29 11:10:24.745562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.756589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fda78 00:21:19.403 [2024-10-29 11:10:24.758755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.758814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.769942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fd208 00:21:19.403 [2024-10-29 11:10:24.772134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.772163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.783674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fc998 00:21:19.403 [2024-10-29 11:10:24.786181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.786210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 
11:10:24.799231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fc128 00:21:19.403 [2024-10-29 11:10:24.801957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.802017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.815247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fb8b8 00:21:19.403 [2024-10-29 11:10:24.817731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.817810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.831082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fb048 00:21:19.403 [2024-10-29 11:10:24.833602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.833649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.846788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fa7d8 00:21:19.403 [2024-10-29 11:10:24.849063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.849107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.861262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f9f68 00:21:19.403 [2024-10-29 11:10:24.863532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.863561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.875393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f96f8 00:21:19.403 [2024-10-29 11:10:24.877497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.877541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:19.403 [2024-10-29 11:10:24.889567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f8e88 00:21:19.403 [2024-10-29 11:10:24.891607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.403 [2024-10-29 11:10:24.891635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:19.662 
[2024-10-29 11:10:24.904412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f8618 00:21:19.662 [2024-10-29 11:10:24.906662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.662 [2024-10-29 11:10:24.906707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:19.662 [2024-10-29 11:10:24.918664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f7da8 00:21:19.662 [2024-10-29 11:10:24.920681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.662 [2024-10-29 11:10:24.920714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:19.662 [2024-10-29 11:10:24.932984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f7538 00:21:19.662 [2024-10-29 11:10:24.935004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.662 [2024-10-29 11:10:24.935048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:24.947229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f6cc8 00:21:19.663 [2024-10-29 11:10:24.949280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:24.949323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:24.961413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f6458 00:21:19.663 [2024-10-29 11:10:24.963582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:24.963625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:24.977392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f5be8 00:21:19.663 [2024-10-29 11:10:24.979740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:24.979772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:24.992223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f5378 00:21:19.663 [2024-10-29 11:10:24.994216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:24.994260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:21:19.663 [2024-10-29 11:10:25.006531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f4b08 00:21:19.663 [2024-10-29 11:10:25.008381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.008425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.020284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f4298 00:21:19.663 [2024-10-29 11:10:25.022442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.022480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.034570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f3a28 00:21:19.663 [2024-10-29 11:10:25.036409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.036459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.049082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f31b8 00:21:19.663 [2024-10-29 11:10:25.051025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.051066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.062892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f2948 00:21:19.663 [2024-10-29 11:10:25.064694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.064738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.076098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f20d8 00:21:19.663 [2024-10-29 11:10:25.078051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.078092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.090043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f1868 00:21:19.663 [2024-10-29 11:10:25.091896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.091938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d 
p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.104012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f0ff8 00:21:19.663 [2024-10-29 11:10:25.105887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.105929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.117494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f0788 00:21:19.663 [2024-10-29 11:10:25.119218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.119260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.130838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eff18 00:21:19.663 [2024-10-29 11:10:25.132614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.132657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.144679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ef6a8 00:21:19.663 [2024-10-29 11:10:25.146415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.146457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:19.663 [2024-10-29 11:10:25.158143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eee38 00:21:19.663 [2024-10-29 11:10:25.160052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.663 [2024-10-29 11:10:25.160094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.172307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ee5c8 00:21:19.923 [2024-10-29 11:10:25.174090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.174134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.186156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166edd58 00:21:19.923 [2024-10-29 11:10:25.187888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.187931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 
sqhd:003f p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.199786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ed4e8 00:21:19.923 [2024-10-29 11:10:25.201512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.201555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.213349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ecc78 00:21:19.923 [2024-10-29 11:10:25.214951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.214994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.226707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ec408 00:21:19.923 [2024-10-29 11:10:25.228329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.228371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.240216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ebb98 00:21:19.923 [2024-10-29 11:10:25.241951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.241992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.253713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eb328 00:21:19.923 [2024-10-29 11:10:25.255258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.255300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.267005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eaab8 00:21:19.923 [2024-10-29 11:10:25.268641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.268685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.280268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ea248 00:21:19.923 [2024-10-29 11:10:25.281937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.281979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.293834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e99d8 00:21:19.923 [2024-10-29 11:10:25.295407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.295456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.307150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e9168 00:21:19.923 [2024-10-29 11:10:25.308798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.308827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.320725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e88f8 00:21:19.923 [2024-10-29 11:10:25.322237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.322279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.334103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e8088 00:21:19.923 [2024-10-29 11:10:25.335658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.335701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.347513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e7818 00:21:19.923 [2024-10-29 11:10:25.349076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.349118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.361018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e6fa8 00:21:19.923 [2024-10-29 11:10:25.362465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.362505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.374319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e6738 00:21:19.923 [2024-10-29 11:10:25.375780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.375838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.387838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e5ec8 00:21:19.923 [2024-10-29 11:10:25.389355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.389406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.401284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e5658 00:21:19.923 [2024-10-29 11:10:25.402679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.402722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:19.923 [2024-10-29 11:10:25.414516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e4de8 00:21:19.923 [2024-10-29 11:10:25.415930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:19.923 [2024-10-29 11:10:25.415972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.428958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e4578 00:21:20.183 [2024-10-29 11:10:25.430318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.430362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.442651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e3d08 00:21:20.183 [2024-10-29 11:10:25.444032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.444076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.456053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e3498 00:21:20.183 [2024-10-29 11:10:25.457465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.457535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.469538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e2c28 00:21:20.183 [2024-10-29 11:10:25.470843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.470886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.482744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e23b8 00:21:20.183 [2024-10-29 11:10:25.484099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.484141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.496262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e1b48 00:21:20.183 [2024-10-29 11:10:25.497661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.497703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.509682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e12d8 00:21:20.183 [2024-10-29 11:10:25.510939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.510981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.522983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e0a68 00:21:20.183 [2024-10-29 11:10:25.524270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.524313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.536381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e01f8 00:21:20.183 [2024-10-29 11:10:25.537713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.537755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.549878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166df988 00:21:20.183 [2024-10-29 11:10:25.551095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.551138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.563079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166df118 00:21:20.183 [2024-10-29 11:10:25.564326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.564368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.576421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166de8a8 00:21:20.183 [2024-10-29 11:10:25.577692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.577734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.589838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166de038 00:21:20.183 [2024-10-29 11:10:25.590998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.591041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.608820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166de038 00:21:20.183 [2024-10-29 11:10:25.610976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.611019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.622074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166de8a8 00:21:20.183 [2024-10-29 11:10:25.624278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.624320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.635937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166df118 00:21:20.183 [2024-10-29 11:10:25.638156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.638198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.649543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166df988 00:21:20.183 [2024-10-29 11:10:25.651659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.651702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.662772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e01f8 00:21:20.183 [2024-10-29 11:10:25.664974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 
11:10:25.665015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:20.183 [2024-10-29 11:10:25.676118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e0a68 00:21:20.183 [2024-10-29 11:10:25.678530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.183 [2024-10-29 11:10:25.678572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.690626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e12d8 00:21:20.443 [2024-10-29 11:10:25.692787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.692834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:20.443 18218.00 IOPS, 71.16 MiB/s [2024-10-29T11:10:25.940Z] [2024-10-29 11:10:25.704258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e1b48 00:21:20.443 [2024-10-29 11:10:25.706454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.706496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.717742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e23b8 00:21:20.443 [2024-10-29 11:10:25.719857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.719899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.731277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e2c28 00:21:20.443 [2024-10-29 11:10:25.733437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.733479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.744692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e3498 00:21:20.443 [2024-10-29 11:10:25.746744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.746786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.758055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e3d08 00:21:20.443 [2024-10-29 11:10:25.760102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:12095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.760143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.771398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e4578 00:21:20.443 [2024-10-29 11:10:25.773474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.773517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.784901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e4de8 00:21:20.443 [2024-10-29 11:10:25.786878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.786920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.798948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e5658 00:21:20.443 [2024-10-29 11:10:25.801035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.801077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.813182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e5ec8 00:21:20.443 [2024-10-29 11:10:25.815343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.815410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.829464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e6738 00:21:20.443 [2024-10-29 11:10:25.831823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.831881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.844671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e6fa8 00:21:20.443 [2024-10-29 11:10:25.846884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.846925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.858953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e7818 00:21:20.443 [2024-10-29 11:10:25.860962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:5951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.861007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.872674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e8088 00:21:20.443 [2024-10-29 11:10:25.874665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.874709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.886309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e88f8 00:21:20.443 [2024-10-29 11:10:25.888253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.888296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.899960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e9168 00:21:20.443 [2024-10-29 11:10:25.901937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.901980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.913597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166e99d8 00:21:20.443 [2024-10-29 11:10:25.915427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.915469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.927170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ea248 00:21:20.443 [2024-10-29 11:10:25.929117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.443 [2024-10-29 11:10:25.929159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:20.443 [2024-10-29 11:10:25.941208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eaab8 00:21:20.703 [2024-10-29 11:10:25.943205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:25.943248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:25.955179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eb328 00:21:20.703 [2024-10-29 11:10:25.957122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:15875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:25.957165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:25.968773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ebb98 00:21:20.703 [2024-10-29 11:10:25.970535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:25.970578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:25.982080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ec408 00:21:20.703 [2024-10-29 11:10:25.983879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:25.983922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:25.995765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ecc78 00:21:20.703 [2024-10-29 11:10:25.997583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:25.997625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.009196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ed4e8 00:21:20.703 [2024-10-29 11:10:26.010958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.011000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.022567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166edd58 00:21:20.703 [2024-10-29 11:10:26.024271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.024313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.036032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ee5c8 00:21:20.703 [2024-10-29 11:10:26.037852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.037909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.050581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eee38 00:21:20.703 [2024-10-29 11:10:26.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.052472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.066369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166ef6a8 00:21:20.703 [2024-10-29 11:10:26.068366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.068437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.081877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166eff18 00:21:20.703 [2024-10-29 11:10:26.083583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.083628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.096333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f0788 00:21:20.703 [2024-10-29 11:10:26.098022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.098066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.110229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f0ff8 00:21:20.703 [2024-10-29 11:10:26.111920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.111964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.124074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f1868 00:21:20.703 [2024-10-29 11:10:26.125782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.125825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.138246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f20d8 00:21:20.703 [2024-10-29 11:10:26.139928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.139971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.152765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f2948 00:21:20.703 [2024-10-29 11:10:26.154364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.154415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.166571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f31b8 00:21:20.703 [2024-10-29 11:10:26.168273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.703 [2024-10-29 11:10:26.168316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:20.703 [2024-10-29 11:10:26.180580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f3a28 00:21:20.703 [2024-10-29 11:10:26.182114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.704 [2024-10-29 11:10:26.182158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:20.704 [2024-10-29 11:10:26.195009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f4298 00:21:20.704 [2024-10-29 11:10:26.196606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.704 [2024-10-29 11:10:26.196651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:20.963 [2024-10-29 11:10:26.210036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f4b08 00:21:20.963 [2024-10-29 11:10:26.211538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.963 [2024-10-29 11:10:26.211582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:20.963 [2024-10-29 11:10:26.224209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f5378 00:21:20.963 [2024-10-29 11:10:26.225856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.963 [2024-10-29 11:10:26.225899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:20.963 [2024-10-29 11:10:26.238394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f5be8 00:21:20.963 [2024-10-29 11:10:26.239897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.963 [2024-10-29 11:10:26.239940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:20.963 [2024-10-29 11:10:26.252671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f6458 00:21:20.963 [2024-10-29 11:10:26.254132] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.963 [2024-10-29 11:10:26.254175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:20.963 [2024-10-29 11:10:26.267017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f6cc8 00:21:20.963 [2024-10-29 11:10:26.268518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.963 [2024-10-29 11:10:26.268587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:20.963 [2024-10-29 11:10:26.280624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f7538 00:21:20.963 [2024-10-29 11:10:26.282064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.282105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.294082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f7da8 00:21:20.964 [2024-10-29 11:10:26.295511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.295553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.307554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f8618 00:21:20.964 [2024-10-29 11:10:26.309004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.309046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.321008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f8e88 00:21:20.964 [2024-10-29 11:10:26.322343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.322396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.334264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f96f8 00:21:20.964 [2024-10-29 11:10:26.335657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.335700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.347576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f9f68 00:21:20.964 [2024-10-29 
11:10:26.348996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.349038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.360921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fa7d8 00:21:20.964 [2024-10-29 11:10:26.362251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.362293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.374291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fb048 00:21:20.964 [2024-10-29 11:10:26.375576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.375619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.387482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fb8b8 00:21:20.964 [2024-10-29 11:10:26.388846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.388906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.400971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fc128 00:21:20.964 [2024-10-29 11:10:26.402218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.402260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.414221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fc998 00:21:20.964 [2024-10-29 11:10:26.415481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.415522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.427437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fd208 00:21:20.964 [2024-10-29 11:10:26.428743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.428787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.440956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fda78 
00:21:20.964 [2024-10-29 11:10:26.442160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.442202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:20.964 [2024-10-29 11:10:26.454255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fe2e8 00:21:20.964 [2024-10-29 11:10:26.455488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:20.964 [2024-10-29 11:10:26.455543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:21.223 [2024-10-29 11:10:26.468483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166feb58 00:21:21.223 [2024-10-29 11:10:26.469779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.223 [2024-10-29 11:10:26.469822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:21.223 [2024-10-29 11:10:26.487229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fef90 00:21:21.223 [2024-10-29 11:10:26.489564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.223 [2024-10-29 11:10:26.489608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.223 [2024-10-29 11:10:26.500771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166feb58 00:21:21.223 [2024-10-29 11:10:26.502957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.223 [2024-10-29 11:10:26.503001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:21.223 [2024-10-29 11:10:26.514181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fe2e8 00:21:21.223 [2024-10-29 11:10:26.516325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.223 [2024-10-29 11:10:26.516367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:21.223 [2024-10-29 11:10:26.527588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fda78 00:21:21.223 [2024-10-29 11:10:26.529815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.223 [2024-10-29 11:10:26.529856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.541015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) 
with pdu=0x2000166fd208 00:21:21.224 [2024-10-29 11:10:26.543192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.543235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.554678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fc998 00:21:21.224 [2024-10-29 11:10:26.556900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.568056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fc128 00:21:21.224 [2024-10-29 11:10:26.570212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.570255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.581627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fb8b8 00:21:21.224 [2024-10-29 11:10:26.583670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.583711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.595028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fb048 00:21:21.224 [2024-10-29 11:10:26.597209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.597251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.608507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166fa7d8 00:21:21.224 [2024-10-29 11:10:26.610592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.610635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.621840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f9f68 00:21:21.224 [2024-10-29 11:10:26.623852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.623894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.635287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcbe6d0) with pdu=0x2000166f96f8 00:21:21.224 [2024-10-29 11:10:26.637342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.637384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.648757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f8e88 00:21:21.224 [2024-10-29 11:10:26.650752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.650810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.662053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f8618 00:21:21.224 [2024-10-29 11:10:26.664069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.664111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.676028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f7da8 00:21:21.224 [2024-10-29 11:10:26.678110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.678151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:21.224 [2024-10-29 11:10:26.689712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f7538 00:21:21.224 [2024-10-29 11:10:26.691656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.691699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:21.224 18280.50 IOPS, 71.41 MiB/s [2024-10-29T11:10:26.721Z] [2024-10-29 11:10:26.704197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbe6d0) with pdu=0x2000166f6cc8 00:21:21.224 [2024-10-29 11:10:26.706236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:21.224 [2024-10-29 11:10:26.706278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.224 00:21:21.224 Latency(us) 00:21:21.224 [2024-10-29T11:10:26.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.224 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:21.224 nvme0n1 : 2.01 18322.17 71.57 0.00 0.00 6980.18 2398.02 25856.93 00:21:21.224 [2024-10-29T11:10:26.721Z] =================================================================================================================== 00:21:21.224 [2024-10-29T11:10:26.721Z] Total : 18322.17 71.57 0.00 0.00 6980.18 
2398.02 25856.93 00:21:21.224 { 00:21:21.224 "results": [ 00:21:21.224 { 00:21:21.224 "job": "nvme0n1", 00:21:21.224 "core_mask": "0x2", 00:21:21.224 "workload": "randwrite", 00:21:21.224 "status": "finished", 00:21:21.224 "queue_depth": 128, 00:21:21.224 "io_size": 4096, 00:21:21.224 "runtime": 2.009369, 00:21:21.224 "iops": 18322.16979559255, 00:21:21.224 "mibps": 71.57097576403339, 00:21:21.224 "io_failed": 0, 00:21:21.224 "io_timeout": 0, 00:21:21.224 "avg_latency_us": 6980.17988700565, 00:21:21.224 "min_latency_us": 2398.021818181818, 00:21:21.224 "max_latency_us": 25856.93090909091 00:21:21.224 } 00:21:21.224 ], 00:21:21.224 "core_count": 1 00:21:21.224 } 00:21:21.483 11:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:21.483 11:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:21.483 11:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:21.483 | .driver_specific 00:21:21.483 | .nvme_error 00:21:21.483 | .status_code 00:21:21.483 | .command_transient_transport_error' 00:21:21.483 11:10:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95629 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95629 ']' 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95629 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:21.742 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95629 00:21:21.742 killing process with pid 95629 00:21:21.742 Received shutdown signal, test time was about 2.000000 seconds 00:21:21.742 00:21:21.742 Latency(us) 00:21:21.742 [2024-10-29T11:10:27.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.742 [2024-10-29T11:10:27.240Z] =================================================================================================================== 00:21:21.743 [2024-10-29T11:10:27.240Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95629' 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95629 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95629 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:21.743 11:10:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95679 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95679 /var/tmp/bperf.sock 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 95679 ']' 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:21.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:21.743 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:22.002 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.002 Zero copy mechanism will not be used. 00:21:22.002 [2024-10-29 11:10:27.254155] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:21:22.002 [2024-10-29 11:10:27.254254] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95679 ] 00:21:22.002 [2024-10-29 11:10:27.401297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.002 [2024-10-29 11:10:27.420117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.002 [2024-10-29 11:10:27.447627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:22.002 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:22.002 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:21:22.002 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:22.002 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:22.570 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:22.570 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.570 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:22.570 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.570 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.570 11:10:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:22.570 nvme0n1 00:21:22.830 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:22.830 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.830 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:22.830 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.830 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:22.830 11:10:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:22.830 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.830 Zero copy mechanism will not be used. 00:21:22.830 Running I/O for 2 seconds... 
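For reference, the RPC sequence that the digest-error test drives in the trace above can be condensed into the following sketch. This is an illustrative reconstruction assembled from the xtrace lines in this log, not captured output; every path, address, and flag is taken from the trace, except that the socket targeted by the accel_error_inject_error call is not visible in the excerpt and is assumed here to be the bperf socket.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf in wait mode (-z): 131072-byte (128 KiB) random writes, queue depth 16, 2 s run
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
        -w randwrite -o 131072 -t 2 -q 16 -z &

    # Track NVMe error counters and retry indefinitely so digest failures surface as transient errors
    $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe-oF/TCP controller with data digest (DDGST) enabled
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd CRC-32C computation so data digests come out wrong
    # (socket assumed; the rpc_cmd wrapper in the trace does not show which app it targets)
    $RPC -s $BPERF_SOCK accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the queued workload, then read back the transient transport error counter
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
    $RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest errors reported below (tcp.c data_crc32_calc_done and the COMMAND TRANSIENT TRANSPORT ERROR completions) are the expected result of that injected CRC-32C corruption; the test later asserts that the transient error count read via bdev_get_iostat is greater than zero.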
00:21:22.830 [2024-10-29 11:10:28.225147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.225435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.225463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.229812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.230095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.230123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.234534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.234815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.234841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.239199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.239492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.239517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.243962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.244210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.244235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.248710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.249042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.249068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.253540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.253807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.253832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.258144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.258436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.258456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.262881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.263173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.263192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.267538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.267808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.267832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.271985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.272250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.272274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.276660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.276977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.277001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.830 [2024-10-29 11:10:28.281367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.830 [2024-10-29 11:10:28.281637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.830 [2024-10-29 11:10:28.281660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.285926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.286212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.290536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.290796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.290820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.294952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.295210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.295235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.299573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.299856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.299880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.304070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.304331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.304355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.308520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.308819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.308844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.313232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.313533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.313558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.317896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.318156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.318179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.322439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.322699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.322723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:22.831 [2024-10-29 11:10:28.327273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:22.831 [2024-10-29 11:10:28.327583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.831 [2024-10-29 11:10:28.327609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.332059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.332358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.332426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.337007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.337273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.337298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.341660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.341920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.341944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.346238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.346524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.346548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.351049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.351295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 
[2024-10-29 11:10:28.351320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.355613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.355893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.355917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.360261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.360601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.360628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.365035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.365296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.365320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.369551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.369810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.369833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.374037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.374297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.374321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.378616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.378875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.378898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.383129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.383428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.383448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.387714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.388006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.388041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.392421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.392729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.392754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.396905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.397180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.397204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.401560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.401819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.401843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.406002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.406262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.406286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.410610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.410870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.410893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.415077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.415341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.415365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.419573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.419852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.419875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.424096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.424355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.424388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.428725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.429042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.429065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.433425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.433695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.433719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.437961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.438220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.438243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.442529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.442791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.442815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.446981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.092 [2024-10-29 11:10:28.447239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.092 [2024-10-29 11:10:28.447263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.092 [2024-10-29 11:10:28.451613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.451892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.451916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.456178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.456455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.456479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.460840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.461136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.461160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.465490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.465749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.465772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.469864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.470123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.470146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.474455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.474714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.474738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.478893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 
[2024-10-29 11:10:28.479155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.479174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.483434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.483717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.483752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.488112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.488359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.488391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.492881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.493127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.493150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.497948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.498212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.498237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.502939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.503208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.507831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.508149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.508170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.513082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) 
with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.513371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.513409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.518149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.518504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.518526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.523352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.523708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.523763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.528427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.528778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.528819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.533451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.533742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.533780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.538251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.538569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.538594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.543181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.543494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.543519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.548219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.548502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.548549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.553180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.553464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.553498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.557833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.558085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.558110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.562482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.562746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.562770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.567143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.567465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.567498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.571885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.572137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.572177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.576615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.576912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.576951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.581306] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.581603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.093 [2024-10-29 11:10:28.581627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.093 [2024-10-29 11:10:28.586042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.093 [2024-10-29 11:10:28.586308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.094 [2024-10-29 11:10:28.586333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.591162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.591456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.591482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.596013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.596315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.596340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.600843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.601141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.601166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.605599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.605882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.605906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.610412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.610662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.610685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
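(Editor's note on the pattern above: each repeated trio of lines is the target's tcp.c reporting a data digest mismatch on an incoming WRITE data PDU, followed by the host printing the affected command and its completion with TRANSIENT TRANSPORT ERROR (00/22). The NVMe/TCP data digest is a CRC32C checksum over the PDU's DATA field. Below is a minimal, self-contained C sketch of that digest check; it is illustrative only, not SPDK's implementation, and the "received" value and the main() harness are hypothetical.)

/*
 * Illustrative sketch (not SPDK source): compute a CRC32C data digest and
 * compare it with the digest received on the wire, the same kind of check
 * that data_crc32_calc_done() reports as a "Data digest error" above.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (reflected Castagnoli polynomial 0x82F63B78); a real
 * target would use a table-driven or hardware-accelerated version. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Standard check value: CRC32C("123456789") == 0xE3069283. */
    const uint8_t data[] = "123456789";
    uint32_t computed = crc32c(data, strlen((const char *)data));
    uint32_t received = 0xDEADBEEFu;   /* hypothetical corrupted digest from the wire */

    printf("computed digest: 0x%08X\n", computed);
    if (computed != received) {
        /* Analogous to the "Data digest error" lines in this log. */
        printf("Data digest error: expected 0x%08X, got 0x%08X\n",
               computed, received);
    }
    return 0;
}

(Because the payload, not the command itself, is what failed the check, the completion status seen above is a transient transport error rather than a media or command error, which is why each command is reported with dnr:0 and may be retried by the host.)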
00:21:23.354 [2024-10-29 11:10:28.615007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.615271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.615295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.619613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.619877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.619901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.624209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.624503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.624550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.628951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.629250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.629275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.633727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.633992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.634015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.638277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.638595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.638619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.354 [2024-10-29 11:10:28.643038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.354 [2024-10-29 11:10:28.643303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.354 [2024-10-29 11:10:28.643327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.647689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.647971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.647996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.652515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.652817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.652857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.657188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.657470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.657505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.661889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.662158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.662183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.666537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.666820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.666844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.671311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.671576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.671600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.675903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.676167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.676191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.680515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.680833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.680858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.685250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.685560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.685585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.690064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.690329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.690353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.694845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.695109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.695133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.699477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.699745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.699769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.704049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.704315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.704339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.708709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.708999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.709023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.713529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.713793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.713817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.718083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.718347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.718396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.722791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.723078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.727369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.727647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.727670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.732129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.732389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.732425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.737014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.737274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.737298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.741851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.742116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 
[2024-10-29 11:10:28.742140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.746568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.746863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.746886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.751256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.751547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.751567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.755813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.756106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.756125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.760510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.760814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.760868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.765224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.765516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.765535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.769853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.770128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.770163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.774398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.355 [2024-10-29 11:10:28.774664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:23.355 [2024-10-29 11:10:28.774688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.355 [2024-10-29 11:10:28.778928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.779187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.779211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.783622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.783891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.783915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.788299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.788616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.788641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.793219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.793502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.793536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.797877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.798135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.798159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.802547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.802828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.802851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.807071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.807330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.807353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.811602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.811860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.811883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.816084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.816343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.816367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.820644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.820932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.820955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.825340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.825620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.825643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.829862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.830124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.830147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.834477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.834741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.834764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.838929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.839187] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.839211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.843525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.843786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.843809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.356 [2024-10-29 11:10:28.848116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.356 [2024-10-29 11:10:28.848456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.356 [2024-10-29 11:10:28.848493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.853400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.853671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.853696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.858207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.858573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.858603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.863081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.863343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.863369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.867630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.867891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.867915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.872244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.872582] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.872609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.877161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.877426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.877463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.882019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.882300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.882324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.887039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.887346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.887402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.892257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.892650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.892677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.897947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.898273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.898299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.903130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.903441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.903480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.908170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 
00:21:23.617 [2024-10-29 11:10:28.908496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.908518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.913256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.913582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.913608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.918280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.918601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.918627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.923204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.923533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.923558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.928230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.928606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.928643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.933223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.933565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.933590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.938080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.938339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.938363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.942899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.943165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.943189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.947472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.947731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.947754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.951949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.952207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.952231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.956452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.956769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.956794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.617 [2024-10-29 11:10:28.961141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.617 [2024-10-29 11:10:28.961400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.617 [2024-10-29 11:10:28.961450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.965721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.966012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.966047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.970252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.970564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.974903] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.975178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.975213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.979523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.979782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.979805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.984019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.984277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.984296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.988511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.988834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.993252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.993570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.993594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:28.997915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:28.998173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:28.998196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.002564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.002822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.002845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 
[2024-10-29 11:10:29.007035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.007293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.007317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.011569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.011830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.011853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.015994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.016256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.016280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.020508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.020805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.020829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.025209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.025522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.025546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.029894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.030159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.030183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.034581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.034844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.034867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.039035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.039293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.039317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.043668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.043927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.043951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.048150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.048434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.048458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.052849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.053157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.053181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.057447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.057705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.057729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.061910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.062173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.062196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.066474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.066742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.066766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.070934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.071192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.071216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.075430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.075691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.075715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.079960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.080222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.080246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.084483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.084787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.084812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.089205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.618 [2024-10-29 11:10:29.089522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.618 [2024-10-29 11:10:29.089547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.618 [2024-10-29 11:10:29.093893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.619 [2024-10-29 11:10:29.094151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.619 [2024-10-29 11:10:29.094174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.619 [2024-10-29 11:10:29.098457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.619 [2024-10-29 11:10:29.098725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.619 [2024-10-29 11:10:29.098748] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.619 [2024-10-29 11:10:29.103060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.619 [2024-10-29 11:10:29.103319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.619 [2024-10-29 11:10:29.103342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.619 [2024-10-29 11:10:29.107675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.619 [2024-10-29 11:10:29.107933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.619 [2024-10-29 11:10:29.107967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.619 [2024-10-29 11:10:29.112388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.619 [2024-10-29 11:10:29.112739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.619 [2024-10-29 11:10:29.112766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.117435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.117708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.117732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.122160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.122464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.122489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.126885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.127129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.127153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.131408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.131667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.131690] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.136088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.136358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.136390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.140777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.141095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.141119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.145472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.879 [2024-10-29 11:10:29.145737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.879 [2024-10-29 11:10:29.145760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.879 [2024-10-29 11:10:29.150161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.150461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.150486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.154794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.155053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.155076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.159282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.159574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.159599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.163776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.164034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:23.880 [2024-10-29 11:10:29.164058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.168332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.168631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.168671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.173014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.173285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.173309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.177658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.177917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.177940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.182244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.182551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.182573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.186960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.187235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.187269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.191560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.191818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.191841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.196227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.196584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.196612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.201078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.201343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.201368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.205787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.206045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.206069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.210397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.210659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.210683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.214905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.215163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.215187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 6577.00 IOPS, 822.12 MiB/s [2024-10-29T11:10:29.377Z] [2024-10-29 11:10:29.220317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.220630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.220656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.224997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.225242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.225266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.229615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.229878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.229898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.234288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.234606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.234627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.239001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.239267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.239291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.243601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.243882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.243905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.248285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.248593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.248627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.253063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.253322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.253346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.257636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.257897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.257921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.262230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 
11:10:29.262519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.262543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.266857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.267114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.267138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.271365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.271658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.271681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.275877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.880 [2024-10-29 11:10:29.276139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.880 [2024-10-29 11:10:29.276162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.880 [2024-10-29 11:10:29.280606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.280926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.280949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.285239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.285527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.285561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.289843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.290118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.290142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.294436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with 
pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.294698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.294722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.298881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.299140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.299164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.303415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.303680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.303703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.307867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.308125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.308149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.312610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.312889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.312927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.317327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.317616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.317640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.321872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.322130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.322154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.326480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.326738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.326762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.330934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.331196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.331220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.335448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.335712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.335736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.339914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.340172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.340196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.344383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.344714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.344739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.348990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.349248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.349272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.353605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.353863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.353886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.358193] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.358482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.358506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.362766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.363024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.363047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.367299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.367587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.367610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:23.881 [2024-10-29 11:10:29.372251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:23.881 [2024-10-29 11:10:29.372621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:23.881 [2024-10-29 11:10:29.372653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.377448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.377769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.377794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.382130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.382433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.382468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.386965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.387230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.387255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
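(Editor's note on the repeated messages above: each tcp.c:2233 data_crc32_calc_done "Data digest error" entry means the CRC-32C digest the initiator computed over a received data PDU did not match the digest carried in the PDU, and the affected WRITE is then completed with the "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" status shown on the following line. The sketch below is a minimal, standalone illustration of that digest check, not SPDK's actual implementation; the sample buffer and the deliberately corrupted "received" digest are hypothetical, and SPDK itself uses accelerated CRC-32C routines rather than this bitwise loop.)

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78), the digest
 * algorithm NVMe/TCP uses for header and data digests.
 * Sanity check value: crc32c("123456789") == 0xE3069283. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
	/* Hypothetical 32-byte data segment of a received PDU. */
	uint8_t pdu_data[32];
	memset(pdu_data, 0xA5, sizeof(pdu_data));

	uint32_t computed = crc32c(pdu_data, sizeof(pdu_data));
	/* Pretend the digest carried in the PDU was corrupted in flight. */
	uint32_t received = computed ^ 0x1u;

	if (computed != received) {
		/* This mismatch is the condition the log's data_crc32_calc_done
		 * callback reports as a data digest error. */
		printf("Data digest error: computed 0x%08x, received 0x%08x\n",
		       computed, received);
	}
	return 0;
}
```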
00:21:24.142 [2024-10-29 11:10:29.391700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.391961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.391985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.396334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.396685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.396712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.401166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.401443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.401478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.405874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.406121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.406145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.410446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.410704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.410727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.414950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.415207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.415231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.419445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.419702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.419725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.424023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.424308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.424333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.429013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.429285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.429309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.433825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.434076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.434100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.438418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.438676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.438698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.442960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.443222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.443245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.447593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.447852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.447875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.452259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.452575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.142 [2024-10-29 11:10:29.452601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.142 [2024-10-29 11:10:29.456995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.142 [2024-10-29 11:10:29.457264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.457288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.461632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.461891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.461914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.466202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.466506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.466530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.470911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.471167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.471191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.475540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.475801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.475824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.480007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.480265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.480288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.484516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.484812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.484836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.489242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.489533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.489553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.493862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.494139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.494174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.498349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.498641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.498664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.502946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.503208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.503232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.507603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.507868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.507892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.512129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.512373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.512407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.516726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.517053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 
[2024-10-29 11:10:29.517076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.521378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.521683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.521702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.526024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.526276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.526315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.530591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.530871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.530894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.535238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.535515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.535538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.539960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.540210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.540235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.544708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.545022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.545045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.549364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.549643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.549667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.554001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.554260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.554284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.558623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.558901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.558925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.563246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.563544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.563568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.567919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.568185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.568210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.572450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.572772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.572798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.577151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.577410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.577443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.581781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.582039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.582063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.586413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.143 [2024-10-29 11:10:29.586680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.143 [2024-10-29 11:10:29.586704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.143 [2024-10-29 11:10:29.590996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.591258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.591283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.595562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.595822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.595845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.600256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.600635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.600663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.605025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.605291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.605316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.609658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.609919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.609942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.614163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.614468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.614492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.618842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.619108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.619132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.623463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.623729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.623753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.627905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.628167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.628190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.632494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.632822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.632846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.144 [2024-10-29 11:10:29.637435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.144 [2024-10-29 11:10:29.637728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.144 [2024-10-29 11:10:29.637753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.642288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.642578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.642603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.647112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 
[2024-10-29 11:10:29.647357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.647407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.651742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.652001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.652025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.656386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.656706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.656732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.661157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.661431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.661464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.665726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.665986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.666009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.670184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.670492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.670516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.674876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.675144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.675169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.679446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) 
with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.679705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.679728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.683911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.684172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.684195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.688558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.688830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.688869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.693233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.693520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.693540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.697747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.698021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.698056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.702364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.702655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.702680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.706901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.707158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.707182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.711493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.711750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.711774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.716033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.716299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.716323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.720691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.721022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.721046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.725376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.725658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.725681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.729913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.730173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.730198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.734372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.734664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.734689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.739293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.739592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.739616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.744062] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.744330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.403 [2024-10-29 11:10:29.744354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.403 [2024-10-29 11:10:29.749084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.403 [2024-10-29 11:10:29.749334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.749358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.754242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.754559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.754585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.759468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.759803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.759842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.764625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.764989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.765014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.769841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.770108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.770132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.774761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.775024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.775049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:24.404 [2024-10-29 11:10:29.779551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.779853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.779876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.784337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.784720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.784746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.789350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.789641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.789664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.794035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.794286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.794309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.798672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.798940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.798964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.803326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.803625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.803649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.808164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.808418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.808453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.812886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.813156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.813195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.817540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.817804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.817827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.822135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.822413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.822437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.826696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.826960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.826984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.831524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.831815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.831839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.836220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.836500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.836530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.840876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.841177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.841201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.845597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.845863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.845886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.850404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.850698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.850738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.855077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.855329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.855368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.859741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.860024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.860047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.864327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.864654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.864679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.869014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.869281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.869305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.873837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.874104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.874128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.878446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.878730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.878753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.883091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.404 [2024-10-29 11:10:29.883360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.404 [2024-10-29 11:10:29.883408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.404 [2024-10-29 11:10:29.887754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.405 [2024-10-29 11:10:29.888036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.405 [2024-10-29 11:10:29.888060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.405 [2024-10-29 11:10:29.892744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.405 [2024-10-29 11:10:29.893029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.405 [2024-10-29 11:10:29.893052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.405 [2024-10-29 11:10:29.897337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.405 [2024-10-29 11:10:29.897678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.405 [2024-10-29 11:10:29.897704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.902649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.902949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.902975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.907842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.908117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 
[2024-10-29 11:10:29.908142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.913330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.913684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.913711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.918792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.919072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.919097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.924051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.924318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.924342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.929239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.929591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.929619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.934683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.935019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.935058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.939807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.940071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.940095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.944882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.945164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.945187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.949732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.950020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.954587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.954943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.954967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.959379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.959665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.959689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.964049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.664 [2024-10-29 11:10:29.964314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.664 [2024-10-29 11:10:29.964338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.664 [2024-10-29 11:10:29.968734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.969033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.969056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:29.973367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.973655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.973679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:29.978247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.978578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.978603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:29.983080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.983339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.983362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:29.987840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.988106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.988129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:29.992778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.993101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.993125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:29.997454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:29.997722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:29.997745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.002004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.002262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.002281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.007201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.007528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.007569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.012023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.012303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.012329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.017176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.017477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.017503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.022504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.022806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.022835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.027528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.027816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.027840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.032708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.033051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.033077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.037539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.037820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.037844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.042339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.042623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.042662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.046939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 
[2024-10-29 11:10:30.047199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.047223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.051637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.051895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.051918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.056133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.056402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.056426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.060822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.061121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.061146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.065526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.065790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.065813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.070084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.070344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.070367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.074664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.074942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.074965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.079567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) 
with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.079850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.079873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.084405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.084734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.084760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.089216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.089522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.089546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.093783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.094058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.094094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.098236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.098540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.098591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.665 [2024-10-29 11:10:30.102969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.665 [2024-10-29 11:10:30.103243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.665 [2024-10-29 11:10:30.103277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.107513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.107787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.107822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.112043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.112319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.112354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.116602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.116934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.116959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.121219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.121510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.121533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.125766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.126039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.126074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.130246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.130551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.130575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.134758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.135032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.135056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.139273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.139563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.139586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.143919] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.144177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.144202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.148550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.148896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.148920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.153274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.153563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.153587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.666 [2024-10-29 11:10:30.157952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.666 [2024-10-29 11:10:30.158232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.666 [2024-10-29 11:10:30.158259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.163076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.163358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.163393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.167848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.168152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.168178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.172663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.173024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.173051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:24.926 [2024-10-29 11:10:30.177338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.177616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.177640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.182015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.182275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.182299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.186770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.187042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.187066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.191356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.191640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.191663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.195895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.196152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.196176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.200452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.200772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.200797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.205286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.205558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.205580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.209894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.210145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.210168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.926 [2024-10-29 11:10:30.214570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcbea10) with pdu=0x2000166fef90 00:21:24.926 [2024-10-29 11:10:30.214849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.926 [2024-10-29 11:10:30.214872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.926 6576.00 IOPS, 822.00 MiB/s 00:21:24.926 Latency(us) 00:21:24.926 [2024-10-29T11:10:30.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.926 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:24.926 nvme0n1 : 2.00 6574.49 821.81 0.00 0.00 2428.43 1630.95 7149.38 00:21:24.926 [2024-10-29T11:10:30.423Z] =================================================================================================================== 00:21:24.926 [2024-10-29T11:10:30.423Z] Total : 6574.49 821.81 0.00 0.00 2428.43 1630.95 7149.38 00:21:24.926 { 00:21:24.926 "results": [ 00:21:24.926 { 00:21:24.926 "job": "nvme0n1", 00:21:24.926 "core_mask": "0x2", 00:21:24.926 "workload": "randwrite", 00:21:24.926 "status": "finished", 00:21:24.926 "queue_depth": 16, 00:21:24.926 "io_size": 131072, 00:21:24.926 "runtime": 2.002894, 00:21:24.926 "iops": 6574.48671771946, 00:21:24.926 "mibps": 821.8108397149325, 00:21:24.926 "io_failed": 0, 00:21:24.926 "io_timeout": 0, 00:21:24.926 "avg_latency_us": 2428.4284988401632, 00:21:24.926 "min_latency_us": 1630.9527272727273, 00:21:24.926 "max_latency_us": 7149.381818181818 00:21:24.926 } 00:21:24.926 ], 00:21:24.926 "core_count": 1 00:21:24.926 } 00:21:24.926 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:24.926 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:24.926 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:24.926 | .driver_specific 00:21:24.926 | .nvme_error 00:21:24.926 | .status_code 00:21:24.926 | .command_transient_transport_error' 00:21:24.926 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 )) 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95679 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95679 ']' 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95679 00:21:25.185 11:10:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95679 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:25.185 killing process with pid 95679 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95679' 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95679 00:21:25.185 Received shutdown signal, test time was about 2.000000 seconds 00:21:25.185 00:21:25.185 Latency(us) 00:21:25.185 [2024-10-29T11:10:30.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.185 [2024-10-29T11:10:30.682Z] =================================================================================================================== 00:21:25.185 [2024-10-29T11:10:30.682Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95679 00:21:25.185 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95503 00:21:25.186 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 95503 ']' 00:21:25.186 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 95503 00:21:25.186 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:21:25.186 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.186 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95503 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.457 killing process with pid 95503 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95503' 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 95503 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 95503 00:21:25.457 00:21:25.457 real 0m14.516s 00:21:25.457 user 0m28.365s 00:21:25.457 sys 0m4.297s 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:25.457 ************************************ 00:21:25.457 END TEST nvmf_digest_error 00:21:25.457 ************************************ 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 
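The long run of data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR lines above is expected output: the digest_error test deliberately corrupts the NVMe/TCP data digest, so every WRITE is rejected with status 00/22 and retried. The trace then reads the accumulated transient-error count back from the bdevperf instance over its RPC socket and asserts it is non-zero (424 in this run). A minimal sketch of that check, reconstructed from the trace above — the rpc.py path, socket, jq filter and bdev name are the ones this run uses and would differ elsewhere:

get_transient_errcount() {
    local bdev=$1
    # query bdevperf over its private RPC socket for the per-bdev NVMe error counters
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
errcount=$(get_transient_errcount nvme0n1)   # evaluates to 424 in the run captured above
(( errcount > 0 ))                           # test passes only if digest errors were actually observed and retried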
00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.457 rmmod nvme_tcp 00:21:25.457 rmmod nvme_fabrics 00:21:25.457 rmmod nvme_keyring 00:21:25.457 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 95503 ']' 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 95503 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 95503 ']' 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 95503 00:21:25.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (95503) - No such process 00:21:25.768 Process with pid 95503 is not found 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 95503 is not found' 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:25.768 11:10:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:25.768 
11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:25.768 00:21:25.768 real 0m30.167s 00:21:25.768 user 0m57.128s 00:21:25.768 sys 0m8.961s 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:25.768 ************************************ 00:21:25.768 END TEST nvmf_digest 00:21:25.768 ************************************ 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.768 ************************************ 00:21:25.768 START TEST nvmf_host_multipath 00:21:25.768 ************************************ 00:21:25.768 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:26.028 * Looking for test storage... 
00:21:26.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.028 --rc genhtml_branch_coverage=1 00:21:26.028 --rc genhtml_function_coverage=1 00:21:26.028 --rc genhtml_legend=1 00:21:26.028 --rc geninfo_all_blocks=1 00:21:26.028 --rc geninfo_unexecuted_blocks=1 00:21:26.028 00:21:26.028 ' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.028 --rc genhtml_branch_coverage=1 00:21:26.028 --rc genhtml_function_coverage=1 00:21:26.028 --rc genhtml_legend=1 00:21:26.028 --rc geninfo_all_blocks=1 00:21:26.028 --rc geninfo_unexecuted_blocks=1 00:21:26.028 00:21:26.028 ' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.028 --rc genhtml_branch_coverage=1 00:21:26.028 --rc genhtml_function_coverage=1 00:21:26.028 --rc genhtml_legend=1 00:21:26.028 --rc geninfo_all_blocks=1 00:21:26.028 --rc geninfo_unexecuted_blocks=1 00:21:26.028 00:21:26.028 ' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:26.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.028 --rc genhtml_branch_coverage=1 00:21:26.028 --rc genhtml_function_coverage=1 00:21:26.028 --rc genhtml_legend=1 00:21:26.028 --rc geninfo_all_blocks=1 00:21:26.028 --rc geninfo_unexecuted_blocks=1 00:21:26.028 00:21:26.028 ' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.028 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:26.029 Cannot find device "nvmf_init_br" 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:26.029 Cannot find device "nvmf_init_br2" 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:26.029 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:26.288 Cannot find device "nvmf_tgt_br" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:26.288 Cannot find device "nvmf_tgt_br2" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:26.288 Cannot find device "nvmf_init_br" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:26.288 Cannot find device "nvmf_init_br2" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:26.288 Cannot find device "nvmf_tgt_br" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:26.288 Cannot find device "nvmf_tgt_br2" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:26.288 Cannot find device "nvmf_br" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:26.288 Cannot find device "nvmf_init_if" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:26.288 Cannot find device "nvmf_init_if2" 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:26.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:26.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:26.288 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:26.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:26.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:21:26.548 00:21:26.548 --- 10.0.0.3 ping statistics --- 00:21:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.548 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:26.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:26.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:21:26.548 00:21:26.548 --- 10.0.0.4 ping statistics --- 00:21:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.548 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:26.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:26.548 00:21:26.548 --- 10.0.0.1 ping statistics --- 00:21:26.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.548 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:26.548 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:26.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:26.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:21:26.548 00:21:26.549 --- 10.0.0.2 ping statistics --- 00:21:26.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.549 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95981 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95981 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 95981 ']' 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:26.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:26.549 11:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:26.549 [2024-10-29 11:10:31.976169] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
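The nvmf_veth_init trace above amounts to a small two-path topology: two initiator-side veth interfaces stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and all four bridge-side peers are enslaved to nvmf_br so the host can reach both target listeners. Below is a condensed sketch of that same sequence, using only the names and addresses that appear in the trace; it is a readability summary of commands already shown (link-up steps and the SPDK_NVMF iptables comment tag added by the ipts wrapper are omitted), not additional steps performed by the test.

    # target network namespace and one veth pair per path, per side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses: initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bridge the four *_br peers together and accept NVMe/TCP (4420) on the initiator side
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br            # condensed form of the four @211-@214 commands
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check in both directions before launching nvmf_tgt in the namespace
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2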
00:21:26.549 [2024-10-29 11:10:31.976254] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.808 [2024-10-29 11:10:32.130260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:26.809 [2024-10-29 11:10:32.154126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.809 [2024-10-29 11:10:32.154196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.809 [2024-10-29 11:10:32.154210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.809 [2024-10-29 11:10:32.154220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.809 [2024-10-29 11:10:32.154229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.809 [2024-10-29 11:10:32.155115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.809 [2024-10-29 11:10:32.155131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.809 [2024-10-29 11:10:32.188162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95981 00:21:26.809 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:27.068 [2024-10-29 11:10:32.494941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.068 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:27.328 Malloc0 00:21:27.328 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:27.587 11:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.846 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:28.105 [2024-10-29 11:10:33.399709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:28.105 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:28.365 [2024-10-29 11:10:33.659884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96029 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96029 /var/tmp/bdevperf.sock 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 96029 ']' 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.365 11:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:29.303 11:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:29.303 11:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:21:29.303 11:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:29.562 11:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:29.820 Nvme0n1 00:21:29.820 11:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:30.080 Nvme0n1 00:21:30.080 11:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:30.080 11:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:31.019 11:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:31.019 11:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:31.587 11:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:31.587 11:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:31.587 11:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:31.587 11:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96070 00:21:31.587 11:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:38.154 Attaching 4 probes... 00:21:38.154 @path[10.0.0.3, 4421]: 15113 00:21:38.154 @path[10.0.0.3, 4421]: 15670 00:21:38.154 @path[10.0.0.3, 4421]: 18369 00:21:38.154 @path[10.0.0.3, 4421]: 20595 00:21:38.154 @path[10.0.0.3, 4421]: 20656 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96070 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:38.154 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:38.414 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:38.414 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96189 00:21:38.414 11:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:38.414 11:10:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:44.980 11:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:44.980 11:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:44.980 Attaching 4 probes... 00:21:44.980 @path[10.0.0.3, 4420]: 20403 00:21:44.980 @path[10.0.0.3, 4420]: 20945 00:21:44.980 @path[10.0.0.3, 4420]: 20666 00:21:44.980 @path[10.0.0.3, 4420]: 20731 00:21:44.980 @path[10.0.0.3, 4420]: 20839 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96189 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:44.980 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:45.239 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:45.239 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96307 00:21:45.239 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:45.239 11:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:51.835 Attaching 4 probes... 00:21:51.835 @path[10.0.0.3, 4421]: 14722 00:21:51.835 @path[10.0.0.3, 4421]: 20349 00:21:51.835 @path[10.0.0.3, 4421]: 20588 00:21:51.835 @path[10.0.0.3, 4421]: 20577 00:21:51.835 @path[10.0.0.3, 4421]: 20391 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:51.835 11:10:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96307 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:51.835 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:52.094 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:52.094 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96422 00:21:52.094 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:52.094 11:10:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:58.660 Attaching 4 probes... 
00:21:58.660 00:21:58.660 00:21:58.660 00:21:58.660 00:21:58.660 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96422 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:58.660 11:11:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:58.660 11:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:58.919 11:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:58.919 11:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:58.919 11:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96535 00:21:58.919 11:11:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:05.484 Attaching 4 probes... 
00:22:05.484 @path[10.0.0.3, 4421]: 19634 00:22:05.484 @path[10.0.0.3, 4421]: 19899 00:22:05.484 @path[10.0.0.3, 4421]: 19810 00:22:05.484 @path[10.0.0.3, 4421]: 19952 00:22:05.484 @path[10.0.0.3, 4421]: 19971 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96535 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:05.484 11:11:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:06.422 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:06.422 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96653 00:22:06.422 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:06.422 11:11:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:12.991 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:12.991 11:11:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:12.991 Attaching 4 probes... 
00:22:12.991 @path[10.0.0.3, 4420]: 19547 00:22:12.991 @path[10.0.0.3, 4420]: 19835 00:22:12.991 @path[10.0.0.3, 4420]: 19546 00:22:12.991 @path[10.0.0.3, 4420]: 19763 00:22:12.991 @path[10.0.0.3, 4420]: 19663 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96653 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:12.991 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:12.992 [2024-10-29 11:11:18.436713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:12.992 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:13.250 11:11:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:19.827 11:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:19.827 11:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96833 00:22:19.827 11:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95981 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:19.827 11:11:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:26.403 Attaching 4 probes... 
00:22:26.403 @path[10.0.0.3, 4421]: 19490 00:22:26.403 @path[10.0.0.3, 4421]: 19745 00:22:26.403 @path[10.0.0.3, 4421]: 19748 00:22:26.403 @path[10.0.0.3, 4421]: 19849 00:22:26.403 @path[10.0.0.3, 4421]: 19897 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96833 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96029 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 96029 ']' 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 96029 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:26.403 11:11:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96029 00:22:26.403 killing process with pid 96029 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96029' 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 96029 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 96029 00:22:26.403 { 00:22:26.403 "results": [ 00:22:26.403 { 00:22:26.403 "job": "Nvme0n1", 00:22:26.403 "core_mask": "0x4", 00:22:26.403 "workload": "verify", 00:22:26.403 "status": "terminated", 00:22:26.403 "verify_range": { 00:22:26.403 "start": 0, 00:22:26.403 "length": 16384 00:22:26.403 }, 00:22:26.403 "queue_depth": 128, 00:22:26.403 "io_size": 4096, 00:22:26.403 "runtime": 55.432089, 00:22:26.403 "iops": 8392.918405077608, 00:22:26.403 "mibps": 32.784837519834404, 00:22:26.403 "io_failed": 0, 00:22:26.403 "io_timeout": 0, 00:22:26.403 "avg_latency_us": 15221.294486426958, 00:22:26.403 "min_latency_us": 160.11636363636364, 00:22:26.403 "max_latency_us": 7015926.69090909 00:22:26.403 } 00:22:26.403 ], 00:22:26.403 "core_count": 1 00:22:26.403 } 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96029 00:22:26.403 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:26.403 [2024-10-29 11:10:33.736779] Starting SPDK v25.01-pre git sha1 12fc2abf1 
/ DPDK 23.11.0 initialization... 00:22:26.403 [2024-10-29 11:10:33.736922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96029 ] 00:22:26.403 [2024-10-29 11:10:33.891964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.403 [2024-10-29 11:10:33.915979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.403 [2024-10-29 11:10:33.948478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:26.403 Running I/O for 90 seconds... 00:22:26.403 7957.00 IOPS, 31.08 MiB/s [2024-10-29T11:11:31.900Z] 7903.00 IOPS, 30.87 MiB/s [2024-10-29T11:11:31.900Z] 7871.33 IOPS, 30.75 MiB/s [2024-10-29T11:11:31.900Z] 7855.75 IOPS, 30.69 MiB/s [2024-10-29T11:11:31.900Z] 8117.20 IOPS, 31.71 MiB/s [2024-10-29T11:11:31.900Z] 8482.33 IOPS, 33.13 MiB/s [2024-10-29T11:11:31.900Z] 8746.57 IOPS, 34.17 MiB/s [2024-10-29T11:11:31.900Z] 8919.25 IOPS, 34.84 MiB/s [2024-10-29T11:11:31.900Z] [2024-10-29 11:10:43.837729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.837785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.837848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.837866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.837886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.837899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.837918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.837931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.837949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.837961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.837979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.837992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.838023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.838054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.838335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.838406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.838442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.403 [2024-10-29 11:10:43.838473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.403 [2024-10-29 11:10:43.838504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.403 [2024-10-29 11:10:43.838551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.403 [2024-10-29 11:10:43.838583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.403 [2024-10-29 11:10:43.838615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:26.403 [2024-10-29 11:10:43.838633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:26.404 [2024-10-29 11:10:43.838662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.838986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.838999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.839707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:22:26.404 [2024-10-29 11:10:43.839881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.839979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.839997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.404 [2024-10-29 11:10:43.840011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:26.404 [2024-10-29 11:10:43.840868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.404 [2024-10-29 11:10:43.840910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.840935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.840949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.840969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.840994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.841029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.841061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.841093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.841125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.841158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:26.405 [2024-10-29 11:10:43.841589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 
nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.841964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.841990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.842004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.405 [2024-10-29 11:10:43.842281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.842314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:26.405 [2024-10-29 11:10:43.842334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.405 [2024-10-29 11:10:43.842347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:22:26.406 [2024-10-29 11:10:43.842615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.842836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.842868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.842901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.842934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.842966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.842985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.842999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.843031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.843064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:43.843103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:43.843353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.406 [2024-10-29 11:10:43.843367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:26.406 9016.56 IOPS, 35.22 MiB/s [2024-10-29T11:11:31.903Z] 9156.90 IOPS, 35.77 MiB/s [2024-10-29T11:11:31.903Z] 9281.91 IOPS, 36.26 MiB/s [2024-10-29T11:11:31.903Z] 9370.92 IOPS, 36.61 MiB/s [2024-10-29T11:11:31.903Z] 9447.62 IOPS, 36.90 MiB/s [2024-10-29T11:11:31.903Z] 9514.64 IOPS, 37.17 MiB/s [2024-10-29T11:11:31.903Z] [2024-10-29 11:10:50.369135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369541] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:26.406 [2024-10-29 11:10:50.369673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.406 [2024-10-29 11:10:50.369688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.369711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.369740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.369776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.369807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.369842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.369871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.369891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.369905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.369924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.369938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.369958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:26.407 [2024-10-29 11:10:50.369980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:80 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.407 [2024-10-29 11:10:50.370603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370765] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.370957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.370986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.371002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.371022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.371037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.371075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.371105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.371126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.407 [2024-10-29 11:10:50.371140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:26.407 [2024-10-29 11:10:50.371161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:22:26.408 [2024-10-29 11:10:50.371196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.371610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.371980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.371994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.408 [2024-10-29 11:10:50.372321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.408 [2024-10-29 11:10:50.372724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.408 [2024-10-29 11:10:50.372762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:26.408 [2024-10-29 11:10:50.372784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.372799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.372835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.372865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.372901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.372915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.372935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.372950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.372977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.372993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.373443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:22:26.409 [2024-10-29 11:10:50.373560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.373705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.373720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.409 [2024-10-29 11:10:50.374508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.374973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.374999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.409 [2024-10-29 11:10:50.375267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:26.409 [2024-10-29 11:10:50.375295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:50.375310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:50.375336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:50.375350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:26.410 9400.87 IOPS, 36.72 MiB/s [2024-10-29T11:11:31.907Z] 8956.25 IOPS, 34.99 MiB/s [2024-10-29T11:11:31.907Z] 9027.65 IOPS, 35.26 MiB/s [2024-10-29T11:11:31.907Z] 9095.89 IOPS, 35.53 MiB/s [2024-10-29T11:11:31.907Z] 9155.26 IOPS, 35.76 MiB/s [2024-10-29T11:11:31.907Z] 9209.10 IOPS, 35.97 MiB/s [2024-10-29T11:11:31.907Z] 9254.95 IOPS, 36.15 MiB/s [2024-10-29T11:11:31.907Z] [2024-10-29 11:10:57.553993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.410 [2024-10-29 11:10:57.554236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.554328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.554957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.554970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.410 [2024-10-29 11:10:57.555366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.555432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.555489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:22:26.410 [2024-10-29 11:10:57.555511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.555526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.555561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.555595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.410 [2024-10-29 11:10:57.555630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:26.410 [2024-10-29 11:10:57.555650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.555984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.555997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.411 [2024-10-29 11:10:57.556576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.411 [2024-10-29 11:10:57.556613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.556975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.556994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 
nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.411 [2024-10-29 11:10:57.557007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:26.411 [2024-10-29 11:10:57.557026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:22:26.412 [2024-10-29 11:10:57.557716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.557832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.557962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.557981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.558001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.558035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.558067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.412 [2024-10-29 11:10:57.558812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.558857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.558896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.558933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.558969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.558983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559193] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.412 [2024-10-29 11:10:57.559270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:26.412 [2024-10-29 11:10:57.559294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 
11:10:57.559629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:10:57.559655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:10:57.559669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:26.413 9265.00 IOPS, 36.19 MiB/s [2024-10-29T11:11:31.910Z] 8862.17 IOPS, 34.62 MiB/s [2024-10-29T11:11:31.910Z] 8492.92 IOPS, 33.18 MiB/s [2024-10-29T11:11:31.910Z] 8153.20 IOPS, 31.85 MiB/s [2024-10-29T11:11:31.910Z] 7839.62 IOPS, 30.62 MiB/s [2024-10-29T11:11:31.910Z] 7549.26 IOPS, 29.49 MiB/s [2024-10-29T11:11:31.910Z] 7279.64 IOPS, 28.44 MiB/s [2024-10-29T11:11:31.910Z] 7045.24 IOPS, 27.52 MiB/s [2024-10-29T11:11:31.910Z] 7139.47 IOPS, 27.89 MiB/s [2024-10-29T11:11:31.910Z] 7230.19 IOPS, 28.24 MiB/s [2024-10-29T11:11:31.910Z] 7314.25 IOPS, 28.57 MiB/s [2024-10-29T11:11:31.910Z] 7394.18 IOPS, 28.88 MiB/s [2024-10-29T11:11:31.910Z] 7472.24 IOPS, 29.19 MiB/s [2024-10-29T11:11:31.910Z] 7537.37 IOPS, 29.44 MiB/s [2024-10-29T11:11:31.910Z] [2024-10-29 11:11:10.809760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.809809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.809873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.809891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.809912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.809925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.809944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.809957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.809975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.809987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.413 [2024-10-29 11:11:10.810641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.413 
[2024-10-29 11:11:10.810748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.413 [2024-10-29 11:11:10.810838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.413 [2024-10-29 11:11:10.810850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.810875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.810897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.810910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.810924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.810936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.810949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.810961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.810974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.810986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:120 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.414 [2024-10-29 11:11:10.811769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:26.414 [2024-10-29 11:11:10.811877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.811976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.811988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.414 [2024-10-29 11:11:10.812001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.414 [2024-10-29 11:11:10.812014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812143] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812423] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:30168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:30216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:26.415 [2024-10-29 11:11:10.812927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.812978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.812992] nvme_qpair.c: 243:nvme_io_qpair_print_command: 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.415 *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:26.415 [2024-10-29 11:11:10.813009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.813023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.813036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.415 [2024-10-29 11:11:10.813050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.415 [2024-10-29 11:11:10.813064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.416 [2024-10-29 11:11:10.813090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.416 [2024-10-29 11:11:10.813116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1148320 is same with the state(6) to be set 00:22:26.416 [2024-10-29 11:11:10.813146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29816 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30272 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30280 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813273] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30288 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30296 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30304 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30312 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30320 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30328 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:26.416 [2024-10-29 11:11:10.813568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30336 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30344 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30352 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30360 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30368 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30376 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813827] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30384 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.813863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:26.416 [2024-10-29 11:11:10.813872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:26.416 [2024-10-29 11:11:10.813881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30392 len:8 PRP1 0x0 PRP2 0x0 00:22:26.416 [2024-10-29 11:11:10.813893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.814009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.416 [2024-10-29 11:11:10.814032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.814046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.416 [2024-10-29 11:11:10.814059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.814071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.416 [2024-10-29 11:11:10.814083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.814096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:26.416 [2024-10-29 11:11:10.814107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.814121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:26.416 [2024-10-29 11:11:10.814133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:26.416 [2024-10-29 11:11:10.814160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112c4e0 is same with the state(6) to be set 00:22:26.416 [2024-10-29 11:11:10.815133] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:26.416 [2024-10-29 11:11:10.815169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112c4e0 (9): Bad file descriptor 00:22:26.416 [2024-10-29 11:11:10.815544] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:26.416 [2024-10-29 11:11:10.815577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112c4e0 with addr=10.0.0.3, port=4421 00:22:26.416 
[2024-10-29 11:11:10.815593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112c4e0 is same with the state(6) to be set 00:22:26.416 [2024-10-29 11:11:10.815657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112c4e0 (9): Bad file descriptor 00:22:26.416 [2024-10-29 11:11:10.815692] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:26.416 [2024-10-29 11:11:10.815706] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:26.416 [2024-10-29 11:11:10.815719] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:26.416 [2024-10-29 11:11:10.815750] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:26.416 [2024-10-29 11:11:10.815768] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:26.416 7599.81 IOPS, 29.69 MiB/s [2024-10-29T11:11:31.914Z] 7655.16 IOPS, 29.90 MiB/s [2024-10-29T11:11:31.914Z] 7712.66 IOPS, 30.13 MiB/s [2024-10-29T11:11:31.914Z] 7768.23 IOPS, 30.34 MiB/s [2024-10-29T11:11:31.914Z] 7820.82 IOPS, 30.55 MiB/s [2024-10-29T11:11:31.914Z] 7871.05 IOPS, 30.75 MiB/s [2024-10-29T11:11:31.914Z] 7917.74 IOPS, 30.93 MiB/s [2024-10-29T11:11:31.914Z] 7958.16 IOPS, 31.09 MiB/s [2024-10-29T11:11:31.914Z] 8000.39 IOPS, 31.25 MiB/s [2024-10-29T11:11:31.914Z] 8041.62 IOPS, 31.41 MiB/s [2024-10-29T11:11:31.914Z] [2024-10-29 11:11:20.866635] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:26.417 8084.35 IOPS, 31.58 MiB/s [2024-10-29T11:11:31.914Z] 8126.91 IOPS, 31.75 MiB/s [2024-10-29T11:11:31.914Z] 8166.17 IOPS, 31.90 MiB/s [2024-10-29T11:11:31.914Z] 8203.43 IOPS, 32.04 MiB/s [2024-10-29T11:11:31.914Z] 8231.20 IOPS, 32.15 MiB/s [2024-10-29T11:11:31.914Z] 8265.73 IOPS, 32.29 MiB/s [2024-10-29T11:11:31.914Z] 8296.71 IOPS, 32.41 MiB/s [2024-10-29T11:11:31.914Z] 8327.49 IOPS, 32.53 MiB/s [2024-10-29T11:11:31.914Z] 8356.00 IOPS, 32.64 MiB/s [2024-10-29T11:11:31.914Z] 8384.38 IOPS, 32.75 MiB/s [2024-10-29T11:11:31.914Z] Received shutdown signal, test time was about 55.432790 seconds 00:22:26.417 00:22:26.417 Latency(us) 00:22:26.417 [2024-10-29T11:11:31.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.417 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:26.417 Verification LBA range: start 0x0 length 0x4000 00:22:26.417 Nvme0n1 : 55.43 8392.92 32.78 0.00 0.00 15221.29 160.12 7015926.69 00:22:26.417 [2024-10-29T11:11:31.914Z] =================================================================================================================== 00:22:26.417 [2024-10-29T11:11:31.914Z] Total : 8392.92 32.78 0.00 0.00 15221.29 160.12 7015926.69 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.417 rmmod nvme_tcp 00:22:26.417 rmmod nvme_fabrics 00:22:26.417 rmmod nvme_keyring 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95981 ']' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95981 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 95981 ']' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 95981 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95981 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:26.417 
11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:26.417 killing process with pid 95981 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95981' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 95981 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 95981 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:26.417 00:22:26.417 real 1m0.609s 00:22:26.417 user 2m47.948s 00:22:26.417 sys 0m18.141s 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.417 ************************************ 00:22:26.417 END TEST nvmf_host_multipath 00:22:26.417 ************************************ 00:22:26.417 11:11:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.677 ************************************ 00:22:26.677 START TEST nvmf_timeout 00:22:26.677 ************************************ 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:26.677 * Looking for test storage... 00:22:26.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:22:26.677 11:11:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:26.677 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.678 --rc genhtml_branch_coverage=1 00:22:26.678 --rc genhtml_function_coverage=1 00:22:26.678 --rc genhtml_legend=1 00:22:26.678 --rc geninfo_all_blocks=1 00:22:26.678 --rc geninfo_unexecuted_blocks=1 00:22:26.678 00:22:26.678 ' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.678 --rc genhtml_branch_coverage=1 00:22:26.678 --rc genhtml_function_coverage=1 00:22:26.678 --rc genhtml_legend=1 00:22:26.678 --rc geninfo_all_blocks=1 00:22:26.678 --rc geninfo_unexecuted_blocks=1 00:22:26.678 00:22:26.678 ' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.678 --rc genhtml_branch_coverage=1 00:22:26.678 --rc genhtml_function_coverage=1 00:22:26.678 --rc genhtml_legend=1 00:22:26.678 --rc geninfo_all_blocks=1 00:22:26.678 --rc geninfo_unexecuted_blocks=1 00:22:26.678 00:22:26.678 ' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:26.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.678 --rc genhtml_branch_coverage=1 00:22:26.678 --rc genhtml_function_coverage=1 00:22:26.678 --rc genhtml_legend=1 00:22:26.678 --rc geninfo_all_blocks=1 00:22:26.678 --rc geninfo_unexecuted_blocks=1 00:22:26.678 00:22:26.678 ' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.678 
11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.678 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.678 11:11:32 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:26.678 Cannot find device "nvmf_init_br" 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:26.678 Cannot find device "nvmf_init_br2" 00:22:26.678 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:26.679 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:26.679 Cannot find device "nvmf_tgt_br" 00:22:26.679 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:26.679 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:26.679 Cannot find device "nvmf_tgt_br2" 00:22:26.679 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:26.679 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:26.938 Cannot find device "nvmf_init_br" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:26.938 Cannot find device "nvmf_init_br2" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:26.938 Cannot find device "nvmf_tgt_br" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:26.938 Cannot find device "nvmf_tgt_br2" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:26.938 Cannot find device "nvmf_br" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:26.938 Cannot find device "nvmf_init_if" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:26.938 Cannot find device "nvmf_init_if2" 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:26.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:26.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:26.938 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:27.197 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:27.197 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:27.197 00:22:27.197 --- 10.0.0.3 ping statistics --- 00:22:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.197 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:27.197 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:27.197 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:22:27.197 00:22:27.197 --- 10.0.0.4 ping statistics --- 00:22:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.197 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:27.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:27.197 00:22:27.197 --- 10.0.0.1 ping statistics --- 00:22:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.197 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:27.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:27.197 00:22:27.197 --- 10.0.0.2 ping statistics --- 00:22:27.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.197 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.197 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97191 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97191 00:22:27.198 11:11:32 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97191 ']' 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.198 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:27.198 [2024-10-29 11:11:32.591474] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:22:27.198 [2024-10-29 11:11:32.591547] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.457 [2024-10-29 11:11:32.724306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:27.457 [2024-10-29 11:11:32.743832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.457 [2024-10-29 11:11:32.743888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.457 [2024-10-29 11:11:32.743897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.457 [2024-10-29 11:11:32.743903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.457 [2024-10-29 11:11:32.743909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.457 [2024-10-29 11:11:32.744770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.457 [2024-10-29 11:11:32.744781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.457 [2024-10-29 11:11:32.774941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.457 11:11:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:27.723 [2024-10-29 11:11:33.182725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.723 11:11:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:28.290 Malloc0 00:22:28.290 11:11:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.290 11:11:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.548 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:28.807 [2024-10-29 11:11:34.230758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97232 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97232 /var/tmp/bdevperf.sock 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97232 ']' 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:28.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:28.807 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:28.807 [2024-10-29 11:11:34.294726] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:22:28.807 [2024-10-29 11:11:34.294820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97232 ] 00:22:29.066 [2024-10-29 11:11:34.437317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.066 [2024-10-29 11:11:34.455821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.066 [2024-10-29 11:11:34.482819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:29.066 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:29.066 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:29.066 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:29.325 11:11:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:29.582 NVMe0n1 00:22:29.582 11:11:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97243 00:22:29.583 11:11:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:29.583 11:11:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:29.841 Running I/O for 10 seconds... 
00:22:30.777 11:11:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:31.039 9152.00 IOPS, 35.75 MiB/s [2024-10-29T11:11:36.536Z] [2024-10-29 11:11:36.343590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab980 is same with the state(6) to be set 00:22:31.039 [2024-10-29 11:11:36.343637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab980 is same with the state(6) to be set 00:22:31.039 [2024-10-29 11:11:36.343663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab980 is same with the state(6) to be set 00:22:31.039 [2024-10-29 11:11:36.343670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab980 is same with the state(6) to be set 00:22:31.039 [2024-10-29 11:11:36.343678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab980 is same with the state(6) to be set 00:22:31.039 [2024-10-29 11:11:36.344314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 
[2024-10-29 11:11:36.344541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.039 [2024-10-29 11:11:36.344868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.344981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.344991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.039 [2024-10-29 11:11:36.345163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.039 [2024-10-29 11:11:36.345172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87992 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 
[2024-10-29 11:11:36.345381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.040 [2024-10-29 11:11:36.345971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:31.040 [2024-10-29 11:11:36.345981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.040 [2024-10-29 11:11:36.345990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 
11:11:36.346192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.041 [2024-10-29 11:11:36.346384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.041 [2024-10-29 11:11:36.346702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd95c00 is same with the state(6) to be set 00:22:31.041 [2024-10-29 11:11:36.346724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.041 [2024-10-29 11:11:36.346732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.041 [2024-10-29 11:11:36.346740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87800 len:8 PRP1 0x0 PRP2 0x0 00:22:31.041 [2024-10-29 11:11:36.346748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.041 [2024-10-29 11:11:36.346766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.041 [2024-10-29 11:11:36.346773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88352 len:8 PRP1 0x0 PRP2 0x0 00:22:31.041 [2024-10-29 11:11:36.346782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.041 [2024-10-29 11:11:36.346798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.041 [2024-10-29 11:11:36.346805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88360 len:8 PRP1 0x0 PRP2 0x0 00:22:31.041 [2024-10-29 11:11:36.346814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.041 [2024-10-29 11:11:36.346829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.041 [2024-10-29 11:11:36.346837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88368 len:8 PRP1 0x0 PRP2 0x0 00:22:31.041 [2024-10-29 11:11:36.346845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.041 [2024-10-29 11:11:36.346854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.346861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.346869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88376 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.346877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.346886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.346893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.346903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88384 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.346914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.346923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.346931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.346939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88392 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.346947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.346956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.346963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.346971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88400 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.346979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.346988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.346995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88408 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 
11:11:36.347020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.347027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88416 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.347059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88424 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.347091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88432 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.347123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88440 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.347155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88448 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:31.042 [2024-10-29 11:11:36.347192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:31.042 [2024-10-29 11:11:36.347200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88456 len:8 PRP1 0x0 PRP2 0x0 00:22:31.042 [2024-10-29 11:11:36.347208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347325] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.042 [2024-10-29 11:11:36.347341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.042 [2024-10-29 11:11:36.347361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.042 [2024-10-29 11:11:36.347395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.042 [2024-10-29 11:11:36.347413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.042 [2024-10-29 11:11:36.347422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77040 is same with the state(6) to be set 00:22:31.042 [2024-10-29 11:11:36.347635] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:31.042 [2024-10-29 11:11:36.347665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77040 (9): Bad file descriptor 00:22:31.042 [2024-10-29 11:11:36.347769] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.042 [2024-10-29 11:11:36.347791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd77040 with addr=10.0.0.3, port=4420 00:22:31.042 [2024-10-29 11:11:36.347801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77040 is same with the state(6) to be set 00:22:31.042 [2024-10-29 11:11:36.347819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77040 (9): Bad file descriptor 00:22:31.042 [2024-10-29 11:11:36.347835] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:31.042 [2024-10-29 11:11:36.347844] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:31.042 [2024-10-29 11:11:36.347854] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:31.042 [2024-10-29 11:11:36.347874] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
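For reference, the retry cadence in the reconnect attempts above is governed by the reconnect timers passed to bdev_nvme_attach_controller when the controller is created. A minimal sketch of such an attach, assuming the same bdevperf RPC socket used throughout this run and the timer values that appear in the attach later in this log (illustrative only, not the exact command that created this controller):

# Sketch: attach an NVMe-oF/TCP controller with reconnect timers.
# The bdev layer retries roughly every --reconnect-delay-sec seconds and gives up on the
# controller once --ctrlr-loss-timeout-sec elapses without a successful reconnect.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1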
00:22:31.042 [2024-10-29 11:11:36.347886] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:31.042 11:11:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:32.924 5465.00 IOPS, 21.35 MiB/s [2024-10-29T11:11:38.421Z] 3643.33 IOPS, 14.23 MiB/s [2024-10-29T11:11:38.421Z] [2024-10-29 11:11:38.348042] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.924 [2024-10-29 11:11:38.348118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd77040 with addr=10.0.0.3, port=4420 00:22:32.924 [2024-10-29 11:11:38.348133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77040 is same with the state(6) to be set 00:22:32.924 [2024-10-29 11:11:38.348153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77040 (9): Bad file descriptor 00:22:32.924 [2024-10-29 11:11:38.348171] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:32.924 [2024-10-29 11:11:38.348180] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:32.924 [2024-10-29 11:11:38.348190] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:32.924 [2024-10-29 11:11:38.348214] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:32.924 [2024-10-29 11:11:38.348226] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:32.924 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:32.924 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.924 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:33.182 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:33.182 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:33.182 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:33.183 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:33.442 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:33.442 11:11:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:35.079 2732.50 IOPS, 10.67 MiB/s [2024-10-29T11:11:40.576Z] 2186.00 IOPS, 8.54 MiB/s [2024-10-29T11:11:40.576Z] [2024-10-29 11:11:40.348418] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:35.079 [2024-10-29 11:11:40.348497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd77040 with addr=10.0.0.3, port=4420 00:22:35.079 [2024-10-29 11:11:40.348511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd77040 is same with the state(6) to be set 00:22:35.079 [2024-10-29 11:11:40.348533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd77040 (9): Bad file descriptor 00:22:35.079 [2024-10-29 11:11:40.348551] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:35.079 [2024-10-29 11:11:40.348585] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:35.079 [2024-10-29 11:11:40.348611] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:35.079 [2024-10-29 11:11:40.348644] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:35.079 [2024-10-29 11:11:40.348657] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:36.952 1821.67 IOPS, 7.12 MiB/s [2024-10-29T11:11:42.449Z] 1561.43 IOPS, 6.10 MiB/s [2024-10-29T11:11:42.449Z] [2024-10-29 11:11:42.348719] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:36.952 [2024-10-29 11:11:42.348774] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:36.952 [2024-10-29 11:11:42.348785] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:36.952 [2024-10-29 11:11:42.348794] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:36.952 [2024-10-29 11:11:42.348817] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:37.889 1366.25 IOPS, 5.34 MiB/s 00:22:37.889 Latency(us) 00:22:37.889 [2024-10-29T11:11:43.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.889 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:37.889 Verification LBA range: start 0x0 length 0x4000 00:22:37.889 NVMe0n1 : 8.17 1337.16 5.22 15.66 0.00 94491.18 3321.48 7015926.69 00:22:37.889 [2024-10-29T11:11:43.386Z] =================================================================================================================== 00:22:37.889 [2024-10-29T11:11:43.386Z] Total : 1337.16 5.22 15.66 0.00 94491.18 3321.48 7015926.69 00:22:37.889 { 00:22:37.889 "results": [ 00:22:37.889 { 00:22:37.889 "job": "NVMe0n1", 00:22:37.889 "core_mask": "0x4", 00:22:37.889 "workload": "verify", 00:22:37.889 "status": "finished", 00:22:37.889 "verify_range": { 00:22:37.889 "start": 0, 00:22:37.889 "length": 16384 00:22:37.889 }, 00:22:37.889 "queue_depth": 128, 00:22:37.889 "io_size": 4096, 00:22:37.889 "runtime": 8.174036, 00:22:37.889 "iops": 1337.1607367523216, 00:22:37.889 "mibps": 5.223284127938756, 00:22:37.889 "io_failed": 128, 00:22:37.889 "io_timeout": 0, 00:22:37.889 "avg_latency_us": 94491.1815689176, 00:22:37.889 "min_latency_us": 3321.4836363636364, 00:22:37.889 "max_latency_us": 7015926.69090909 00:22:37.889 } 00:22:37.889 ], 00:22:37.889 "core_count": 1 00:22:37.889 } 00:22:38.461 11:11:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:38.461 11:11:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:38.461 11:11:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:38.720 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:38.720 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:38.720 11:11:44 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:38.720 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97243 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97232 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97232 ']' 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97232 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97232 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:38.980 killing process with pid 97232 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97232' 00:22:38.980 Received shutdown signal, test time was about 9.277029 seconds 00:22:38.980 00:22:38.980 Latency(us) 00:22:38.980 [2024-10-29T11:11:44.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.980 [2024-10-29T11:11:44.477Z] =================================================================================================================== 00:22:38.980 [2024-10-29T11:11:44.477Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97232 00:22:38.980 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97232 00:22:39.240 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:39.499 [2024-10-29 11:11:44.808086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97366 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97366 /var/tmp/bdevperf.sock 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97366 ']' 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.499 11:11:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:39.499 [2024-10-29 11:11:44.871076] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:22:39.499 [2024-10-29 11:11:44.871176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97366 ] 00:22:39.758 [2024-10-29 11:11:45.013113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.758 [2024-10-29 11:11:45.032548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.758 [2024-10-29 11:11:45.062160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:39.758 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:39.758 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:22:39.758 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:40.017 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:40.275 NVMe0n1 00:22:40.275 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97382 00:22:40.275 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.276 11:11:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:40.534 Running I/O for 10 seconds... 
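As an aside on the per-job results blob printed after the previous run (the NVMe0n1 summary above): the test script already relies on jq, so a one-liner along these lines can reduce such a blob. A minimal sketch, assuming the JSON has been captured to a hypothetical results.json, with field names taken from the block above:

# Sketch: summarize a captured bdevperf results blob with jq (field names as in the block above).
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json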
00:22:41.473 11:11:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:41.473 8084.00 IOPS, 31.58 MiB/s [2024-10-29T11:11:46.970Z] [2024-10-29 11:11:46.947406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.473 [2024-10-29 11:11:46.947464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.473 [2024-10-29 11:11:46.947492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.473 [2024-10-29 11:11:46.947507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.473 [2024-10-29 11:11:46.947516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.473 [2024-10-29 11:11:46.947524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.473 [2024-10-29 11:11:46.947533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.474 [2024-10-29 11:11:46.947541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.474 [2024-10-29 11:11:46.947550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:41.474 [2024-10-29 11:11:46.947814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.474 [2024-10-29 11:11:46.947837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.474 [2024-10-29 11:11:46.947856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.474 [2024-10-29 11:11:46.947866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.474 [2024-10-29 11:11:46.947877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.474 [2024-10-29 11:11:46.947886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.474 [2024-10-29 11:11:46.947897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.474 [2024-10-29 11:11:46.947906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.474 [2024-10-29 11:11:46.947917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:41.474 [2024-10-29 11:11:46.947925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 [00:22:41.474-00:22:41.477, 2024-10-29 11:11:46.947936-11:11:46.950329, nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: queued WRITE commands sqid:1 lba:74048-74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ commands sqid:1 lba:73896-74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:22:41.477 [2024-10-29 11:11:46.950339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:41.477 [2024-10-29 11:11:46.950348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.477 [2024-10-29 11:11:46.950358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaedc00 is same with the state(6) to be set 00:22:41.477 [2024-10-29 11:11:46.950378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:41.477 [2024-10-29 11:11:46.950388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:41.477 [2024-10-29 11:11:46.950411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0 00:22:41.477 [2024-10-29 11:11:46.950421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.477 [2024-10-29 11:11:46.950690] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:41.477 [2024-10-29 11:11:46.950723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:41.477 [2024-10-29 11:11:46.950826] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.477 [2024-10-29 11:11:46.950856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf040 with addr=10.0.0.3, port=4420 00:22:41.477 [2024-10-29 11:11:46.950867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:41.477 [2024-10-29 11:11:46.950885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:41.477 [2024-10-29 11:11:46.950900] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:41.477 [2024-10-29 11:11:46.950909] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:41.477 [2024-10-29 11:11:46.950920] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:41.477 [2024-10-29 11:11:46.950939] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:22:41.477 [2024-10-29 11:11:46.950950] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:41.477 11:11:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:42.671 4618.00 IOPS, 18.04 MiB/s [2024-10-29T11:11:48.168Z] [2024-10-29 11:11:47.951028] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.671 [2024-10-29 11:11:47.951099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf040 with addr=10.0.0.3, port=4420 00:22:42.671 [2024-10-29 11:11:47.951113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:42.672 [2024-10-29 11:11:47.951133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:42.672 [2024-10-29 11:11:47.951149] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:42.672 [2024-10-29 11:11:47.951157] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:42.672 [2024-10-29 11:11:47.951168] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:42.672 [2024-10-29 11:11:47.951188] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:42.672 [2024-10-29 11:11:47.951199] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:42.672 11:11:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:42.931 [2024-10-29 11:11:48.226628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:42.931 11:11:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97382 00:22:43.500 3078.67 IOPS, 12.03 MiB/s [2024-10-29T11:11:48.997Z] [2024-10-29 11:11:48.966005] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:45.385 2309.00 IOPS, 9.02 MiB/s [2024-10-29T11:11:51.824Z] 3649.80 IOPS, 14.26 MiB/s [2024-10-29T11:11:53.217Z] 4843.67 IOPS, 18.92 MiB/s [2024-10-29T11:11:54.156Z] 5693.14 IOPS, 22.24 MiB/s [2024-10-29T11:11:55.092Z] 6331.12 IOPS, 24.73 MiB/s [2024-10-29T11:11:56.027Z] 6850.78 IOPS, 26.76 MiB/s [2024-10-29T11:11:56.027Z] 7253.70 IOPS, 28.33 MiB/s 00:22:50.530 Latency(us) 00:22:50.530 [2024-10-29T11:11:56.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.530 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.530 Verification LBA range: start 0x0 length 0x4000 00:22:50.530 NVMe0n1 : 10.01 7255.89 28.34 0.00 0.00 17606.18 1854.37 3019898.88 00:22:50.530 [2024-10-29T11:11:56.027Z] =================================================================================================================== 00:22:50.530 [2024-10-29T11:11:56.027Z] Total : 7255.89 28.34 0.00 0.00 17606.18 1854.37 3019898.88 00:22:50.530 { 00:22:50.530 "results": [ 00:22:50.530 { 00:22:50.530 "job": "NVMe0n1", 00:22:50.530 "core_mask": "0x4", 00:22:50.530 "workload": "verify", 00:22:50.530 "status": "finished", 00:22:50.530 "verify_range": { 00:22:50.530 "start": 0, 00:22:50.530 "length": 16384 00:22:50.530 }, 00:22:50.530 "queue_depth": 128, 00:22:50.530 "io_size": 4096, 00:22:50.530 "runtime": 10.008415, 00:22:50.530 "iops": 7255.894165060102, 00:22:50.530 "mibps": 28.343336582266023, 00:22:50.530 "io_failed": 0, 00:22:50.530 "io_timeout": 0, 00:22:50.530 "avg_latency_us": 17606.18405908715, 00:22:50.530 "min_latency_us": 1854.370909090909, 00:22:50.530 "max_latency_us": 3019898.88 00:22:50.530 } 00:22:50.530 ], 00:22:50.530 "core_count": 1 00:22:50.530 } 00:22:50.530 11:11:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97487 00:22:50.530 11:11:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:50.530 11:11:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:50.530 Running I/O for 10 seconds... 
00:22:51.465 11:11:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:51.728 7965.00 IOPS, 31.11 MiB/s [2024-10-29T11:11:57.225Z] [2024-10-29 11:11:57.092985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.728 [2024-10-29 11:11:57.093045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.728 [2024-10-29 11:11:57.093246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73120 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.728 [2024-10-29 11:11:57.093254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [00:22:51.728-00:22:51.730, 2024-10-29 11:11:57.093264-11:11:57.094526, nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: queued WRITE commands sqid:1 lba:73128-73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:22:51.730 [2024-10-29 11:11:57.094536] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:96 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.094976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.094994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.730 [2024-10-29 11:11:57.095210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.730 [2024-10-29 11:11:57.095219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095338] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.731 [2024-10-29 11:11:57.095651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.731 [2024-10-29 11:11:57.095670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb01330 is same with the state(6) to be set 00:22:51.731 [2024-10-29 11:11:57.095692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.731 [2024-10-29 11:11:57.095699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.731 [2024-10-29 11:11:57.095708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73944 len:8 PRP1 0x0 PRP2 0x0 00:22:51.731 [2024-10-29 11:11:57.095718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.731 [2024-10-29 11:11:57.095972] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:51.731 [2024-10-29 11:11:57.096051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:51.731 [2024-10-29 11:11:57.096146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.731 [2024-10-29 11:11:57.096176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf040 with addr=10.0.0.3, port=4420 00:22:51.731 [2024-10-29 11:11:57.096188] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:51.731 [2024-10-29 11:11:57.096207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:51.731 [2024-10-29 11:11:57.096222] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:51.731 [2024-10-29 11:11:57.096232] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:51.731 [2024-10-29 11:11:57.096242] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:51.731 [2024-10-29 11:11:57.096263] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:51.731 [2024-10-29 11:11:57.096275] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:51.731 11:11:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:52.669 4558.00 IOPS, 17.80 MiB/s [2024-10-29T11:11:58.166Z] [2024-10-29 11:11:58.096381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.669 [2024-10-29 11:11:58.096457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf040 with addr=10.0.0.3, port=4420 00:22:52.669 [2024-10-29 11:11:58.096478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:52.669 [2024-10-29 11:11:58.096502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:52.669 [2024-10-29 11:11:58.096524] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:52.669 [2024-10-29 11:11:58.096534] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:52.669 [2024-10-29 11:11:58.096545] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:52.669 [2024-10-29 11:11:58.096592] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:52.669 [2024-10-29 11:11:58.096606] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:53.606 3038.67 IOPS, 11.87 MiB/s [2024-10-29T11:11:59.103Z] [2024-10-29 11:11:59.096691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.606 [2024-10-29 11:11:59.096745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf040 with addr=10.0.0.3, port=4420 00:22:53.606 [2024-10-29 11:11:59.096758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:53.606 [2024-10-29 11:11:59.096777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:53.606 [2024-10-29 11:11:59.096793] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:53.606 [2024-10-29 11:11:59.096802] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:53.606 [2024-10-29 11:11:59.096812] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:53.606 [2024-10-29 11:11:59.096833] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:53.606 [2024-10-29 11:11:59.096845] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:54.799 2279.00 IOPS, 8.90 MiB/s [2024-10-29T11:12:00.296Z] [2024-10-29 11:12:00.100104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.799 [2024-10-29 11:12:00.100202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacf040 with addr=10.0.0.3, port=4420 00:22:54.799 [2024-10-29 11:12:00.100217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xacf040 is same with the state(6) to be set 00:22:54.799 [2024-10-29 11:12:00.100495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacf040 (9): Bad file descriptor 00:22:54.799 [2024-10-29 11:12:00.100777] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:54.799 [2024-10-29 11:12:00.100800] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:54.799 [2024-10-29 11:12:00.100813] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:54.799 [2024-10-29 11:12:00.104439] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:54.799 [2024-10-29 11:12:00.104491] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:54.799 11:12:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:55.057 [2024-10-29 11:12:00.375523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.057 11:12:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97487 00:22:55.883 1823.20 IOPS, 7.12 MiB/s [2024-10-29T11:12:01.380Z] [2024-10-29 11:12:01.140664] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:22:57.755 2972.50 IOPS, 11.61 MiB/s [2024-10-29T11:12:04.189Z] 4017.86 IOPS, 15.69 MiB/s [2024-10-29T11:12:05.125Z] 4825.88 IOPS, 18.85 MiB/s [2024-10-29T11:12:06.077Z] 5491.89 IOPS, 21.45 MiB/s [2024-10-29T11:12:06.077Z] 6017.90 IOPS, 23.51 MiB/s 00:23:00.580 Latency(us) 00:23:00.580 [2024-10-29T11:12:06.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.580 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:00.580 Verification LBA range: start 0x0 length 0x4000 00:23:00.580 NVMe0n1 : 10.01 6024.62 23.53 4057.48 0.00 12673.06 577.16 3019898.88 00:23:00.580 [2024-10-29T11:12:06.077Z] =================================================================================================================== 00:23:00.580 [2024-10-29T11:12:06.077Z] Total : 6024.62 23.53 4057.48 0.00 12673.06 0.00 3019898.88 00:23:00.580 { 00:23:00.580 "results": [ 00:23:00.580 { 00:23:00.580 "job": "NVMe0n1", 00:23:00.580 "core_mask": "0x4", 00:23:00.580 "workload": "verify", 00:23:00.580 "status": "finished", 00:23:00.580 "verify_range": { 00:23:00.580 "start": 0, 00:23:00.580 "length": 16384 00:23:00.580 }, 00:23:00.580 "queue_depth": 128, 00:23:00.580 "io_size": 4096, 00:23:00.580 "runtime": 10.00744, 00:23:00.580 "iops": 6024.617684442775, 00:23:00.580 "mibps": 23.53366282985459, 00:23:00.580 "io_failed": 40605, 00:23:00.580 "io_timeout": 0, 00:23:00.580 "avg_latency_us": 12673.064969004989, 00:23:00.580 "min_latency_us": 577.1636363636363, 00:23:00.580 "max_latency_us": 3019898.88 00:23:00.580 } 00:23:00.580 ], 00:23:00.580 "core_count": 1 00:23:00.580 } 00:23:00.580 11:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97366 00:23:00.580 11:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97366 ']' 00:23:00.580 11:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97366 00:23:00.580 11:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:23:00.580 11:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:00.580 11:12:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97366 00:23:00.581 killing process with pid 97366 00:23:00.581 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.581 00:23:00.581 Latency(us) 00:23:00.581 [2024-10-29T11:12:06.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.581 [2024-10-29T11:12:06.078Z] =================================================================================================================== 00:23:00.581 [2024-10-29T11:12:06.078Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.581 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:00.581 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:00.581 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97366' 00:23:00.581 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97366 00:23:00.581 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97366 00:23:00.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97602 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97602 /var/tmp/bdevperf.sock 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 97602 ']' 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:00.841 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:00.841 [2024-10-29 11:12:06.197606] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 
00:23:00.841 [2024-10-29 11:12:06.198225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97602 ] 00:23:01.100 [2024-10-29 11:12:06.345463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.100 [2024-10-29 11:12:06.364171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.100 [2024-10-29 11:12:06.391312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:01.100 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:01.100 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:23:01.100 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97605 00:23:01.100 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97602 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:01.100 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:01.359 11:12:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:01.617 NVMe0n1 00:23:01.617 11:12:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.617 11:12:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97641 00:23:01.617 11:12:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:01.876 Running I/O for 10 seconds... 
00:23:02.813 11:12:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:02.813 17272.00 IOPS, 67.47 MiB/s [2024-10-29T11:12:08.310Z] [2024-10-29 11:12:08.277238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.813 [2024-10-29 11:12:08.277439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 
11:12:08.277468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to 
be set 00:23:02.814 [2024-10-29 11:12:08.277674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcac5c0 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.277975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.814 [2024-10-29 11:12:08.278016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.814 [2024-10-29 11:12:08.278039] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.814 [2024-10-29 11:12:08.278058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.814 [2024-10-29 11:12:08.278076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382040 is same with the state(6) to be set 00:23:02.814 [2024-10-29 11:12:08.278279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75760 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.814 [2024-10-29 11:12:08.278625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.814 [2024-10-29 11:12:08.278637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.815 [2024-10-29 11:12:08.278685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 
11:12:08.278882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.278980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.278991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279281] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.815 [2024-10-29 11:12:08.279473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.815 [2024-10-29 11:12:08.279483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 
11:12:08.279918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.279979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.279988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.816 [2024-10-29 11:12:08.280342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.816 [2024-10-29 11:12:08.280351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106752 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.817 [2024-10-29 11:12:08.280819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.280981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.280992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.817 [2024-10-29 11:12:08.281002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.281012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a0ca0 is same with the state(6) to be set 00:23:02.817 [2024-10-29 11:12:08.281024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.817 [2024-10-29 11:12:08.281031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.817 [2024-10-29 11:12:08.281039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114808 len:8 PRP1 0x0 PRP2 0x0 
00:23:02.817 [2024-10-29 11:12:08.281048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.817 [2024-10-29 11:12:08.281342] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:02.817 [2024-10-29 11:12:08.281403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1382040 (9): Bad file descriptor 00:23:02.817 [2024-10-29 11:12:08.281502] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:02.817 [2024-10-29 11:12:08.281534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1382040 with addr=10.0.0.3, port=4420 00:23:02.817 [2024-10-29 11:12:08.281546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382040 is same with the state(6) to be set 00:23:02.817 [2024-10-29 11:12:08.281565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1382040 (9): Bad file descriptor 00:23:02.817 [2024-10-29 11:12:08.281581] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:02.817 [2024-10-29 11:12:08.281591] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:02.817 [2024-10-29 11:12:08.281601] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:02.817 [2024-10-29 11:12:08.281623] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:02.817 [2024-10-29 11:12:08.281635] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:02.817 11:12:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97641 00:23:04.687 9812.00 IOPS, 38.33 MiB/s [2024-10-29T11:12:10.442Z] 6541.33 IOPS, 25.55 MiB/s [2024-10-29T11:12:10.442Z] [2024-10-29 11:12:10.295865] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.945 [2024-10-29 11:12:10.295945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1382040 with addr=10.0.0.3, port=4420 00:23:04.945 [2024-10-29 11:12:10.295961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382040 is same with the state(6) to be set 00:23:04.945 [2024-10-29 11:12:10.295992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1382040 (9): Bad file descriptor 00:23:04.945 [2024-10-29 11:12:10.296011] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:04.945 [2024-10-29 11:12:10.296020] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:04.945 [2024-10-29 11:12:10.296031] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:04.945 [2024-10-29 11:12:10.296058] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:23:04.945 [2024-10-29 11:12:10.296085] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:06.817 4906.00 IOPS, 19.16 MiB/s [2024-10-29T11:12:12.314Z] 3924.80 IOPS, 15.33 MiB/s [2024-10-29T11:12:12.314Z] [2024-10-29 11:12:12.296238] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:06.817 [2024-10-29 11:12:12.296313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1382040 with addr=10.0.0.3, port=4420 00:23:06.817 [2024-10-29 11:12:12.296328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1382040 is same with the state(6) to be set 00:23:06.817 [2024-10-29 11:12:12.296349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1382040 (9): Bad file descriptor 00:23:06.817 [2024-10-29 11:12:12.296365] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:06.817 [2024-10-29 11:12:12.296374] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:06.817 [2024-10-29 11:12:12.296383] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:06.817 [2024-10-29 11:12:12.296420] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:06.817 [2024-10-29 11:12:12.296434] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:08.687 3270.67 IOPS, 12.78 MiB/s [2024-10-29T11:12:14.442Z] 2803.43 IOPS, 10.95 MiB/s [2024-10-29T11:12:14.442Z] [2024-10-29 11:12:14.296492] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:08.945 [2024-10-29 11:12:14.296540] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:08.945 [2024-10-29 11:12:14.296572] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:08.945 [2024-10-29 11:12:14.296597] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:08.945 [2024-10-29 11:12:14.296621] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
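(The falling IOPS samples interleaved with the reconnect attempts — 9812.00, 6541.33, 4906.00, 3924.80, 3270.67, 2803.43, and finally 2453.00 below — are bdevperf's running averages while the target stays unreachable: no I/O completes after the connection drops, so each sample is the same completion total divided by a growing elapsed time. That total is roughly 19624 I/Os, consistent with the final results that follow: 2409.89 IOPS x 8.143 s runtime ~= 19624, and 2409.89 IOPS x 4096 B ~= 9.41 MiB/s. A minimal shell sketch reproducing the samples under that assumption; the 19624 figure is inferred from the printed averages, not emitted by the test itself:

  # total_ios is a hypothetical value inferred from the running averages above,
  # not a number printed anywhere in this log.
  total_ios=19624
  for elapsed in 2 3 4 5 6 7 8; do
      # running average = completions so far / seconds elapsed
      printf '%ss elapsed: %.2f IOPS\n' "$elapsed" "$(echo "$total_ios / $elapsed" | bc -l)"
  done

With no further completions the average simply decays as 1/t, which is why the summary below reports 2409.89 IOPS over the 8.14-second runtime.)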
00:23:09.882 2453.00 IOPS, 9.58 MiB/s 00:23:09.882 Latency(us) 00:23:09.882 [2024-10-29T11:12:15.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.882 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:09.882 NVMe0n1 : 8.14 2409.89 9.41 15.72 0.00 52713.23 7000.44 7015926.69 00:23:09.882 [2024-10-29T11:12:15.379Z] =================================================================================================================== 00:23:09.882 [2024-10-29T11:12:15.379Z] Total : 2409.89 9.41 15.72 0.00 52713.23 7000.44 7015926.69 00:23:09.882 { 00:23:09.882 "results": [ 00:23:09.882 { 00:23:09.882 "job": "NVMe0n1", 00:23:09.882 "core_mask": "0x4", 00:23:09.882 "workload": "randread", 00:23:09.882 "status": "finished", 00:23:09.882 "queue_depth": 128, 00:23:09.882 "io_size": 4096, 00:23:09.882 "runtime": 8.143117, 00:23:09.882 "iops": 2409.888007258154, 00:23:09.882 "mibps": 9.413625028352165, 00:23:09.882 "io_failed": 128, 00:23:09.882 "io_timeout": 0, 00:23:09.882 "avg_latency_us": 52713.233635995435, 00:23:09.882 "min_latency_us": 7000.436363636363, 00:23:09.882 "max_latency_us": 7015926.69090909 00:23:09.882 } 00:23:09.882 ], 00:23:09.882 "core_count": 1 00:23:09.882 } 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:09.882 Attaching 5 probes... 00:23:09.882 1323.075626: reset bdev controller NVMe0 00:23:09.882 1323.182539: reconnect bdev controller NVMe0 00:23:09.882 3337.490564: reconnect delay bdev controller NVMe0 00:23:09.882 3337.523642: reconnect bdev controller NVMe0 00:23:09.882 5337.871616: reconnect delay bdev controller NVMe0 00:23:09.882 5337.905162: reconnect bdev controller NVMe0 00:23:09.882 7338.203178: reconnect delay bdev controller NVMe0 00:23:09.882 7338.233385: reconnect bdev controller NVMe0 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97605 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97602 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97602 ']' 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97602 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97602 00:23:09.882 killing process with pid 97602 00:23:09.882 Received shutdown signal, test time was about 8.207873 seconds 00:23:09.882 00:23:09.882 Latency(us) 00:23:09.882 [2024-10-29T11:12:15.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.882 [2024-10-29T11:12:15.379Z] =================================================================================================================== 00:23:09.882 [2024-10-29T11:12:15.379Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.882 11:12:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97602' 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97602 00:23:09.882 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97602 00:23:10.141 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.399 rmmod nvme_tcp 00:23:10.399 rmmod nvme_fabrics 00:23:10.399 rmmod nvme_keyring 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97191 ']' 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97191 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 97191 ']' 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 97191 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97191 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97191' 00:23:10.399 killing process with pid 97191 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 97191 00:23:10.399 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 97191 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:10.657 11:12:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:10.657 11:12:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:10.657 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:10.916 00:23:10.916 real 0m44.311s 00:23:10.916 user 2m9.924s 00:23:10.916 sys 0m5.177s 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:10.916 ************************************ 00:23:10.916 END TEST nvmf_timeout 00:23:10.916 ************************************ 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:10.916 ************************************ 00:23:10.916 END TEST nvmf_host 00:23:10.916 ************************************ 00:23:10.916 00:23:10.916 real 5m40.150s 00:23:10.916 user 16m0.241s 00:23:10.916 sys 1m15.795s 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:23:10.916 11:12:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.916 11:12:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:10.916 11:12:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:10.916 ************************************ 00:23:10.916 END TEST nvmf_tcp 00:23:10.916 ************************************ 00:23:10.916 00:23:10.916 real 14m59.060s 00:23:10.916 user 39m28.478s 00:23:10.916 sys 4m3.930s 00:23:10.916 11:12:16 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:10.916 11:12:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:10.916 11:12:16 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:10.916 11:12:16 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:10.916 11:12:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:10.916 11:12:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:10.916 11:12:16 -- common/autotest_common.sh@10 -- # set +x 00:23:10.916 ************************************ 00:23:10.916 START TEST nvmf_dif 00:23:10.916 ************************************ 00:23:10.916 11:12:16 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:11.176 * Looking for test storage... 00:23:11.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.176 11:12:16 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.176 --rc genhtml_branch_coverage=1 00:23:11.176 --rc genhtml_function_coverage=1 00:23:11.176 --rc genhtml_legend=1 00:23:11.176 --rc geninfo_all_blocks=1 00:23:11.176 --rc geninfo_unexecuted_blocks=1 00:23:11.176 00:23:11.176 ' 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.176 --rc genhtml_branch_coverage=1 00:23:11.176 --rc genhtml_function_coverage=1 00:23:11.176 --rc genhtml_legend=1 00:23:11.176 --rc geninfo_all_blocks=1 00:23:11.176 --rc geninfo_unexecuted_blocks=1 00:23:11.176 00:23:11.176 ' 00:23:11.176 11:12:16 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.177 --rc genhtml_branch_coverage=1 00:23:11.177 --rc genhtml_function_coverage=1 00:23:11.177 --rc genhtml_legend=1 00:23:11.177 --rc geninfo_all_blocks=1 00:23:11.177 --rc geninfo_unexecuted_blocks=1 00:23:11.177 00:23:11.177 ' 00:23:11.177 11:12:16 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:11.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.177 --rc genhtml_branch_coverage=1 00:23:11.177 --rc genhtml_function_coverage=1 00:23:11.177 --rc genhtml_legend=1 00:23:11.177 --rc geninfo_all_blocks=1 00:23:11.177 --rc geninfo_unexecuted_blocks=1 00:23:11.177 00:23:11.177 ' 00:23:11.177 11:12:16 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.177 11:12:16 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=61a87890-fef5-4d39-ae0e-c34cd0a177b6 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.177 11:12:16 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.177 11:12:16 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.177 11:12:16 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.177 11:12:16 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.177 11:12:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.177 11:12:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.177 11:12:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.177 11:12:16 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:11.177 11:12:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.177 11:12:16 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.177 11:12:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:11.177 11:12:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:11.177 11:12:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:11.177 11:12:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:11.177 11:12:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.177 11:12:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:11.177 11:12:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:11.177 Cannot find device 
"nvmf_init_br" 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:11.177 Cannot find device "nvmf_init_br2" 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:11.177 Cannot find device "nvmf_tgt_br" 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.177 Cannot find device "nvmf_tgt_br2" 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:11.177 Cannot find device "nvmf_init_br" 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:11.177 11:12:16 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:11.177 Cannot find device "nvmf_init_br2" 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:11.436 Cannot find device "nvmf_tgt_br" 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:11.436 Cannot find device "nvmf_tgt_br2" 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:11.436 Cannot find device "nvmf_br" 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:11.436 Cannot find device "nvmf_init_if" 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:11.436 Cannot find device "nvmf_init_if2" 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.436 11:12:16 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:11.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:11.695 00:23:11.695 --- 10.0.0.3 ping statistics --- 00:23:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.695 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:11.695 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:11.695 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:23:11.695 00:23:11.695 --- 10.0.0.4 ping statistics --- 00:23:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.695 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:11.695 00:23:11.695 --- 10.0.0.1 ping statistics --- 00:23:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.695 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:11.695 11:12:16 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:11.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:23:11.695 00:23:11.695 --- 10.0.0.2 ping statistics --- 00:23:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.695 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:11.695 11:12:17 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.695 11:12:17 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:23:11.695 11:12:17 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:11.695 11:12:17 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:11.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:11.954 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:11.954 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:11.954 11:12:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:11.954 11:12:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:11.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=98139 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:11.954 11:12:17 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 98139 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 98139 ']' 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
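The trace above (nvmf_veth_init) builds the virtual test network before the target is launched: veth pairs for the initiator side, veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target side, everything enslaved to the nvmf_br bridge, addresses 10.0.0.1-10.0.0.4 assigned, TCP port 4420 opened with tagged iptables rules, and reachability verified with ping. A condensed sketch of the same topology, reduced to one initiator and one target interface (names and addresses taken from the log; this is not a substitute for nvmf/common.sh):

# minimal recreation of the veth/bridge layout used by the test
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3    # initiator-side reachability check, as in the log
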
00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:11.954 11:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:12.215 [2024-10-29 11:12:17.501438] Starting SPDK v25.01-pre git sha1 12fc2abf1 / DPDK 23.11.0 initialization... 00:23:12.215 [2024-10-29 11:12:17.501731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.215 [2024-10-29 11:12:17.656770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.215 [2024-10-29 11:12:17.680326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.215 [2024-10-29 11:12:17.680667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.215 [2024-10-29 11:12:17.680906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.215 [2024-10-29 11:12:17.681063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.215 [2024-10-29 11:12:17.681286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.215 [2024-10-29 11:12:17.681710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.478 [2024-10-29 11:12:17.717613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:23:12.478 11:12:17 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:12.478 11:12:17 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.478 11:12:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:12.478 11:12:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:12.478 [2024-10-29 11:12:17.817728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.478 11:12:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:12.478 11:12:17 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:12.479 11:12:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:12.479 ************************************ 00:23:12.479 START TEST fio_dif_1_default 00:23:12.479 ************************************ 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:12.479 11:12:17 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:12.479 bdev_null0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:12.479 [2024-10-29 11:12:17.865898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.479 { 00:23:12.479 "params": { 00:23:12.479 "name": "Nvme$subsystem", 00:23:12.479 "trtype": "$TEST_TRANSPORT", 00:23:12.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.479 "adrfam": "ipv4", 00:23:12.479 "trsvcid": "$NVMF_PORT", 00:23:12.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.479 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.479 "hdgst": ${hdgst:-false}, 00:23:12.479 "ddgst": ${ddgst:-false} 00:23:12.479 }, 00:23:12.479 "method": "bdev_nvme_attach_controller" 00:23:12.479 } 00:23:12.479 EOF 00:23:12.479 )") 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:12.479 "params": { 00:23:12.479 "name": "Nvme0", 00:23:12.479 "trtype": "tcp", 00:23:12.479 "traddr": "10.0.0.3", 00:23:12.479 "adrfam": "ipv4", 00:23:12.479 "trsvcid": "4420", 00:23:12.479 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:12.479 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:12.479 "hdgst": false, 00:23:12.479 "ddgst": false 00:23:12.479 }, 00:23:12.479 "method": "bdev_nvme_attach_controller" 00:23:12.479 }' 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
00:23:12.479 11:12:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:12.738 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:12.738 fio-3.35 00:23:12.738 Starting 1 thread 00:23:24.946 00:23:24.946 filename0: (groupid=0, jobs=1): err= 0: pid=98198: Tue Oct 29 11:12:28 2024 00:23:24.946 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(399MiB/10001msec) 00:23:24.946 slat (nsec): min=5791, max=56391, avg=7396.70, stdev=2998.86 00:23:24.946 clat (usec): min=311, max=5401, avg=369.81, stdev=47.82 00:23:24.946 lat (usec): min=317, max=5428, avg=377.21, stdev=48.39 00:23:24.946 clat percentiles (usec): 00:23:24.946 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:23:24.946 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 371], 00:23:24.946 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 437], 00:23:24.946 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 586], 00:23:24.946 | 99.99th=[ 1319] 00:23:24.946 bw ( KiB/s): min=38272, max=41920, per=99.96%, avg=40848.84, stdev=825.92, samples=19 00:23:24.946 iops : min= 9568, max=10480, avg=10212.21, stdev=206.48, samples=19 00:23:24.946 lat (usec) : 500=99.50%, 750=0.49% 00:23:24.946 lat (msec) : 2=0.01%, 10=0.01% 00:23:24.946 cpu : usr=84.66%, sys=13.50%, ctx=138, majf=0, minf=0 00:23:24.946 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.946 issued rwts: total=102172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.946 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:24.946 00:23:24.946 Run status group 0 (all jobs): 00:23:24.946 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=399MiB (418MB), run=10001-10001msec 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.946 00:23:24.946 real 0m10.888s 00:23:24.946 user 0m9.039s 00:23:24.946 sys 0m1.575s 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:24.946 11:12:28 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.946 ************************************ 00:23:24.946 END TEST fio_dif_1_default 00:23:24.946 ************************************ 00:23:24.946 11:12:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:24.946 11:12:28 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:24.946 11:12:28 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.946 11:12:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.946 ************************************ 00:23:24.946 START TEST fio_dif_1_multi_subsystems 00:23:24.946 ************************************ 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:24.946 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 bdev_null0 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 [2024-10-29 11:12:28.805659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
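Each fio_dif_* case builds its target the same way, and the rpc_cmd calls in the trace show the exact arguments: a DIF-capable null bdev is created, wrapped in a subsystem, given a namespace, and exposed on the TCP listener. The same sequence issued by hand with rpc.py would look roughly like this (RPC socket assumed to be the default /var/tmp/spdk.sock):

# target/dif.sh create_subsystem, spelled out as plain RPCs
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420
# the transport was created once, earlier, with DIF insert/strip enabled:
# scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
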
00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 bdev_null1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:24.947 { 00:23:24.947 "params": { 00:23:24.947 "name": "Nvme$subsystem", 
00:23:24.947 "trtype": "$TEST_TRANSPORT", 00:23:24.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.947 "adrfam": "ipv4", 00:23:24.947 "trsvcid": "$NVMF_PORT", 00:23:24.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.947 "hdgst": ${hdgst:-false}, 00:23:24.947 "ddgst": ${ddgst:-false} 00:23:24.947 }, 00:23:24.947 "method": "bdev_nvme_attach_controller" 00:23:24.947 } 00:23:24.947 EOF 00:23:24.947 )") 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:24.947 { 00:23:24.947 "params": { 00:23:24.947 "name": "Nvme$subsystem", 00:23:24.947 "trtype": "$TEST_TRANSPORT", 00:23:24.947 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.947 "adrfam": "ipv4", 00:23:24.947 "trsvcid": "$NVMF_PORT", 00:23:24.947 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.947 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.947 "hdgst": ${hdgst:-false}, 00:23:24.947 "ddgst": ${ddgst:-false} 00:23:24.947 }, 00:23:24.947 "method": "bdev_nvme_attach_controller" 00:23:24.947 } 00:23:24.947 EOF 00:23:24.947 )") 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:24.947 "params": { 00:23:24.947 "name": "Nvme0", 00:23:24.947 "trtype": "tcp", 00:23:24.947 "traddr": "10.0.0.3", 00:23:24.947 "adrfam": "ipv4", 00:23:24.947 "trsvcid": "4420", 00:23:24.947 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.947 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:24.947 "hdgst": false, 00:23:24.947 "ddgst": false 00:23:24.947 }, 00:23:24.947 "method": "bdev_nvme_attach_controller" 00:23:24.947 },{ 00:23:24.947 "params": { 00:23:24.947 "name": "Nvme1", 00:23:24.947 "trtype": "tcp", 00:23:24.947 "traddr": "10.0.0.3", 00:23:24.947 "adrfam": "ipv4", 00:23:24.947 "trsvcid": "4420", 00:23:24.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.947 "hdgst": false, 00:23:24.947 "ddgst": false 00:23:24.947 }, 00:23:24.947 "method": "bdev_nvme_attach_controller" 00:23:24.947 }' 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:24.947 11:12:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.947 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:24.947 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:24.947 fio-3.35 00:23:24.947 Starting 2 threads 00:23:34.926 00:23:34.926 filename0: (groupid=0, jobs=1): err= 0: pid=98352: Tue Oct 29 11:12:39 2024 00:23:34.926 read: IOPS=5463, BW=21.3MiB/s (22.4MB/s)(213MiB/10001msec) 00:23:34.926 slat (nsec): min=4965, max=56867, avg=12189.96, stdev=4164.37 00:23:34.926 clat (usec): min=557, max=1770, avg=699.27, stdev=52.72 00:23:34.926 lat (usec): min=564, max=1788, avg=711.46, stdev=53.72 00:23:34.926 clat percentiles (usec): 00:23:34.926 | 1.00th=[ 594], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 660], 00:23:34.926 | 30.00th=[ 676], 40.00th=[ 685], 50.00th=[ 693], 60.00th=[ 709], 00:23:34.926 | 70.00th=[ 717], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 791], 00:23:34.926 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 930], 99.95th=[ 947], 00:23:34.926 | 99.99th=[ 988] 00:23:34.926 bw ( KiB/s): min=21440, max=22336, per=49.98%, avg=21845.89, stdev=275.49, samples=19 00:23:34.926 iops : min= 5360, max= 
5584, avg=5461.47, stdev=68.87, samples=19 00:23:34.926 lat (usec) : 750=86.13%, 1000=13.86% 00:23:34.926 lat (msec) : 2=0.01% 00:23:34.926 cpu : usr=90.25%, sys=8.43%, ctx=11, majf=0, minf=0 00:23:34.926 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.926 issued rwts: total=54636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.926 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:34.926 filename1: (groupid=0, jobs=1): err= 0: pid=98353: Tue Oct 29 11:12:39 2024 00:23:34.926 read: IOPS=5463, BW=21.3MiB/s (22.4MB/s)(213MiB/10001msec) 00:23:34.926 slat (usec): min=6, max=107, avg=12.32, stdev= 4.35 00:23:34.926 clat (usec): min=299, max=1516, avg=698.22, stdev=46.70 00:23:34.926 lat (usec): min=306, max=1541, avg=710.54, stdev=47.29 00:23:34.926 clat percentiles (usec): 00:23:34.926 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 660], 00:23:34.926 | 30.00th=[ 668], 40.00th=[ 685], 50.00th=[ 693], 60.00th=[ 701], 00:23:34.926 | 70.00th=[ 709], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:23:34.926 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 922], 99.95th=[ 947], 00:23:34.926 | 99.99th=[ 1045] 00:23:34.926 bw ( KiB/s): min=21440, max=22336, per=49.98%, avg=21845.89, stdev=275.49, samples=19 00:23:34.926 iops : min= 5360, max= 5584, avg=5461.47, stdev=68.87, samples=19 00:23:34.926 lat (usec) : 500=0.01%, 750=87.95%, 1000=12.02% 00:23:34.926 lat (msec) : 2=0.02% 00:23:34.926 cpu : usr=90.93%, sys=7.74%, ctx=175, majf=0, minf=0 00:23:34.926 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.926 issued rwts: total=54639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.926 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:34.926 00:23:34.926 Run status group 0 (all jobs): 00:23:34.926 READ: bw=42.7MiB/s (44.8MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=427MiB (448MB), run=10001-10001msec 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # 
set +x 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 ************************************ 00:23:34.926 END TEST fio_dif_1_multi_subsystems 00:23:34.926 ************************************ 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.926 00:23:34.926 real 0m11.000s 00:23:34.926 user 0m18.789s 00:23:34.926 sys 0m1.862s 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 11:12:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:34.926 11:12:39 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:34.926 11:12:39 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 ************************************ 00:23:34.926 START TEST fio_dif_rand_params 00:23:34.926 ************************************ 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 bdev_null0 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.926 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:34.927 [2024-10-29 11:12:39.856979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:34.927 { 00:23:34.927 "params": { 00:23:34.927 "name": "Nvme$subsystem", 00:23:34.927 "trtype": "$TEST_TRANSPORT", 00:23:34.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.927 "adrfam": "ipv4", 00:23:34.927 "trsvcid": "$NVMF_PORT", 00:23:34.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.927 "hdgst": ${hdgst:-false}, 00:23:34.927 "ddgst": ${ddgst:-false} 00:23:34.927 }, 00:23:34.927 "method": 
"bdev_nvme_attach_controller" 00:23:34.927 } 00:23:34.927 EOF 00:23:34.927 )") 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:34.927 "params": { 00:23:34.927 "name": "Nvme0", 00:23:34.927 "trtype": "tcp", 00:23:34.927 "traddr": "10.0.0.3", 00:23:34.927 "adrfam": "ipv4", 00:23:34.927 "trsvcid": "4420", 00:23:34.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.927 "hdgst": false, 00:23:34.927 "ddgst": false 00:23:34.927 }, 00:23:34.927 "method": "bdev_nvme_attach_controller" 00:23:34.927 }' 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.927 11:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.927 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:34.927 ... 00:23:34.927 fio-3.35 00:23:34.927 Starting 3 threads 00:23:40.203 00:23:40.203 filename0: (groupid=0, jobs=1): err= 0: pid=98512: Tue Oct 29 11:12:45 2024 00:23:40.203 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(182MiB/5007msec) 00:23:40.203 slat (nsec): min=6644, max=45795, avg=14594.54, stdev=4915.90 00:23:40.203 clat (usec): min=7266, max=11845, avg=10291.56, stdev=345.70 00:23:40.203 lat (usec): min=7279, max=11864, avg=10306.16, stdev=346.23 00:23:40.203 clat percentiles (usec): 00:23:40.203 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:23:40.203 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:23:40.203 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:23:40.203 | 99.00th=[11469], 99.50th=[11600], 99.90th=[11863], 99.95th=[11863], 00:23:40.203 | 99.99th=[11863] 00:23:40.203 bw ( KiB/s): min=36096, max=38400, per=33.30%, avg=37156.20, stdev=733.07, samples=10 00:23:40.203 iops : min= 282, max= 300, avg=290.20, stdev= 5.69, samples=10 00:23:40.203 lat (msec) : 10=4.40%, 20=95.60% 00:23:40.203 cpu : usr=91.27%, sys=8.27%, ctx=5, majf=0, minf=0 00:23:40.203 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.203 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:40.203 filename0: (groupid=0, jobs=1): err= 0: pid=98513: Tue Oct 29 11:12:45 2024 00:23:40.203 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(182MiB/5008msec) 00:23:40.203 slat (nsec): min=6679, max=92700, avg=14328.94, stdev=5111.88 00:23:40.203 clat (usec): min=7256, max=11865, avg=10293.16, stdev=347.36 00:23:40.203 lat (usec): min=7268, max=11876, avg=10307.49, stdev=347.99 00:23:40.203 clat percentiles (usec): 00:23:40.203 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:23:40.203 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:23:40.203 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:23:40.203 | 99.00th=[11469], 99.50th=[11600], 99.90th=[11863], 99.95th=[11863], 00:23:40.203 | 99.99th=[11863] 00:23:40.203 bw ( KiB/s): min=36096, max=38400, per=33.30%, avg=37156.20, stdev=733.07, samples=10 00:23:40.203 iops : min= 282, max= 300, avg=290.20, stdev= 5.69, samples=10 00:23:40.203 lat (msec) : 10=4.54%, 20=95.46% 00:23:40.203 cpu : usr=90.81%, sys=8.69%, ctx=41, majf=0, minf=0 00:23:40.203 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.203 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:40.203 filename0: (groupid=0, jobs=1): err= 0: pid=98514: Tue Oct 29 11:12:45 2024 00:23:40.203 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(182MiB/5006msec) 00:23:40.203 slat (nsec): min=6511, max=50245, avg=13855.51, stdev=4996.18 00:23:40.203 clat (usec): min=7982, max=11900, avg=10288.12, stdev=350.21 00:23:40.203 lat (usec): 
min=7992, max=11919, avg=10301.97, stdev=351.18 00:23:40.203 clat percentiles (usec): 00:23:40.203 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:23:40.203 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:23:40.203 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:23:40.203 | 99.00th=[11469], 99.50th=[11600], 99.90th=[11863], 99.95th=[11863], 00:23:40.203 | 99.99th=[11863] 00:23:40.203 bw ( KiB/s): min=36864, max=37632, per=33.32%, avg=37171.20, stdev=396.59, samples=10 00:23:40.203 iops : min= 288, max= 294, avg=290.40, stdev= 3.10, samples=10 00:23:40.203 lat (msec) : 10=6.32%, 20=93.68% 00:23:40.203 cpu : usr=91.49%, sys=7.93%, ctx=14, majf=0, minf=0 00:23:40.203 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.203 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:40.203 00:23:40.203 Run status group 0 (all jobs): 00:23:40.203 READ: bw=109MiB/s (114MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=546MiB (572MB), run=5006-5008msec 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:40.203 11:12:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.203 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.463 bdev_null0 00:23:40.463 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.463 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:40.463 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 [2024-10-29 11:12:45.729750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 bdev_null1 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 
11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 bdev_null2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.464 11:12:45 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.464 { 00:23:40.464 "params": { 00:23:40.464 "name": "Nvme$subsystem", 00:23:40.464 "trtype": "$TEST_TRANSPORT", 00:23:40.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.464 "adrfam": "ipv4", 00:23:40.464 "trsvcid": "$NVMF_PORT", 00:23:40.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.464 "hdgst": ${hdgst:-false}, 00:23:40.464 "ddgst": ${ddgst:-false} 00:23:40.464 }, 00:23:40.464 "method": "bdev_nvme_attach_controller" 00:23:40.464 } 00:23:40.464 EOF 00:23:40.464 )") 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.464 { 00:23:40.464 "params": { 00:23:40.464 "name": "Nvme$subsystem", 00:23:40.464 "trtype": "$TEST_TRANSPORT", 00:23:40.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.464 "adrfam": "ipv4", 00:23:40.464 "trsvcid": "$NVMF_PORT", 00:23:40.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.464 "hdgst": ${hdgst:-false}, 00:23:40.464 "ddgst": ${ddgst:-false} 00:23:40.464 }, 00:23:40.464 "method": "bdev_nvme_attach_controller" 00:23:40.464 } 00:23:40.464 EOF 00:23:40.464 )") 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.464 { 00:23:40.464 "params": { 00:23:40.464 "name": "Nvme$subsystem", 00:23:40.464 "trtype": "$TEST_TRANSPORT", 00:23:40.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.464 "adrfam": "ipv4", 00:23:40.464 "trsvcid": "$NVMF_PORT", 00:23:40.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.464 "hdgst": ${hdgst:-false}, 00:23:40.464 "ddgst": ${ddgst:-false} 00:23:40.464 }, 00:23:40.464 "method": "bdev_nvme_attach_controller" 00:23:40.464 } 00:23:40.464 EOF 00:23:40.464 )") 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:40.464 11:12:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:40.464 "params": { 00:23:40.464 "name": "Nvme0", 00:23:40.464 "trtype": "tcp", 00:23:40.464 "traddr": "10.0.0.3", 00:23:40.464 "adrfam": "ipv4", 00:23:40.464 "trsvcid": "4420", 00:23:40.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:40.464 "hdgst": false, 00:23:40.464 "ddgst": false 00:23:40.464 }, 00:23:40.464 "method": "bdev_nvme_attach_controller" 00:23:40.464 },{ 00:23:40.464 "params": { 00:23:40.464 "name": "Nvme1", 00:23:40.464 "trtype": "tcp", 00:23:40.464 "traddr": "10.0.0.3", 00:23:40.464 "adrfam": "ipv4", 00:23:40.464 "trsvcid": "4420", 00:23:40.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.464 "hdgst": false, 00:23:40.464 "ddgst": false 00:23:40.464 }, 00:23:40.465 "method": "bdev_nvme_attach_controller" 00:23:40.465 },{ 00:23:40.465 "params": { 00:23:40.465 "name": "Nvme2", 00:23:40.465 "trtype": "tcp", 00:23:40.465 "traddr": "10.0.0.3", 00:23:40.465 "adrfam": "ipv4", 00:23:40.465 "trsvcid": "4420", 00:23:40.465 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.465 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.465 "hdgst": false, 00:23:40.465 "ddgst": false 00:23:40.465 }, 00:23:40.465 "method": "bdev_nvme_attach_controller" 00:23:40.465 }' 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:40.465 11:12:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.724 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:40.724 ... 00:23:40.724 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:40.724 ... 00:23:40.724 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:40.724 ... 00:23:40.724 fio-3.35 00:23:40.724 Starting 24 threads 00:23:55.660 fio: pid=98619, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.661 [2024-10-29 11:12:58.759713] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2c694a0 via correct icresp 00:23:55.661 [2024-10-29 11:12:58.759776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2c694a0 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:23:55.661 [2024-10-29 11:12:58.767718] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2c69860 via correct icresp 00:23:55.661 [2024-10-29 11:12:58.767757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2c69860 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=21385216, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=24121344, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=65490944, buflen=4096 
00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=54857728, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=65241088, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=26177536, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=14835712, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=1720320, buflen=4096 00:23:55.661 fio: pid=98622, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=40833024, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=2764800, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=59269120, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=65015808, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=8192, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=37806080, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=15462400, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=17604608, buflen=4096 00:23:55.661 [2024-10-29 11:12:58.779726] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2c69680 via correct icresp 00:23:55.661 [2024-10-29 11:12:58.779764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2c69680 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=36966400, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=55111680, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=38477824, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=23859200, buflen=4096 00:23:55.661 fio: pid=98614, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=58073088, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=59396096, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=56569856, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=61960192, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=41091072, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=42139648, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=4927488, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=66584576, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=53096448, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=13066240, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=66953216, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=55177216, buflen=4096 00:23:55.661 [2024-10-29 11:12:58.783860] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2c69c20 via correct icresp 
00:23:55.661 [2024-10-29 11:12:58.783898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2c69c20 00:23:55.661 fio: pid=98612, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.661 fio: pid=98618, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.661 [2024-10-29 11:12:58.784000] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4672000 via correct icresp 00:23:55.661 [2024-10-29 11:12:58.784036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4672000 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=37404672, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=66187264, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:23:55.661 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=27402240, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:23:55.661 fio: io_u error on 
file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:23:55.661 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:23:55.661 [2024-10-29 11:12:58.788820] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x2c692c0 via correct icresp 00:23:55.661 [2024-10-29 11:12:58.788854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2c692c0 00:23:55.661 fio: io_u error on file Nvme2n1: Input/output error: read offset=19333120, buflen=4096 00:23:55.661 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=35749888, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=24522752, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=45363200, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=46178304, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=12845056, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=48640000, buflen=4096 00:23:55.662 fio: pid=98626, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=48943104, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=19562496, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=10452992, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=7839744, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=52715520, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=51306496, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=22798336, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=41910272, buflen=4096 00:23:55.662 [2024-10-29 11:12:58.789261] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x46743c0 via correct icresp 00:23:55.662 [2024-10-29 11:12:58.789387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x46743c0 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=28934144, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=47759360, buflen=4096 00:23:55.662 fio: pid=98623, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=5263360, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=64847872, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=737280, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=36368384, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=43036672, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=52785152, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=36683776, buflen=4096 
00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=37040128, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=48943104, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=40386560, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=24539136, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=50425856, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=10510336, buflen=4096 00:23:55.662 fio: io_u error on file Nvme1n1: Input/output error: read offset=27250688, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=18067456, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=5357568, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=50810880, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=55627776, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=5726208, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=57307136, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=59633664, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=34484224, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=48857088, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=815104, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=50061312, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=12541952, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=29999104, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=62156800, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=3403776, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=46788608, buflen=4096 00:23:55.662 fio: pid=98625, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.662 [2024-10-29 11:12:58.794708] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x46741e0 via correct icresp 00:23:55.662 [2024-10-29 11:12:58.794747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x46741e0 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=3481600, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=56184832, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=66342912, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=24215552, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=43143168, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=66883584, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=51359744, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=10039296, buflen=4096 00:23:55.662 fio: pid=98627, err=5/file:io_u.c:1889, func=io_u 
error, error=Input/output error 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=26849280, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=11550720, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=26439680, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=22265856, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=16035840, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=34263040, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=64954368, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=65003520, buflen=4096 00:23:55.662 [2024-10-29 11:12:58.798729] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4674000 via correct icresp 00:23:55.662 [2024-10-29 11:12:58.798921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4674000 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=15663104, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=9408512, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=15581184, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=27107328, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=55336960, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=39804928, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=46354432, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=34320384, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=9441280, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=26243072, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=39927808, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=55951360, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=46231552, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=23080960, buflen=4096 00:23:55.662 fio: pid=98610, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=37081088, buflen=4096 00:23:55.662 fio: io_u error on file Nvme0n1: Input/output error: read offset=33689600, buflen=4096 00:23:55.662 [2024-10-29 11:12:58.802673] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4674b40 via correct icresp 00:23:55.662 [2024-10-29 11:12:58.802862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4674b40 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:23:55.662 fio: io_u error 
on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:23:55.662 fio: io_u error on file Nvme2n1: Input/output error: read offset=44896256, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=40357888, buflen=4096 00:23:55.663 fio: pid=98629, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:23:55.663 [2024-10-29 11:12:58.806853] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x46745a0 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.807047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x46745a0 00:23:55.663 [2024-10-29 11:12:58.806866] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4674f00 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.807342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2c681e0 (9): Bad file descriptor 00:23:55.663 fio: pid=98621, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=53186560, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=34996224, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=37572608, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=19476480, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=27541504, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=44208128, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=36532224, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=38891520, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=913408, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=58646528, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=18313216, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=24666112, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=58052608, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=48078848, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read offset=4517888, buflen=4096 00:23:55.663 fio: io_u error on file Nvme1n1: Input/output error: read 
offset=50819072, buflen=4096 00:23:55.663 [2024-10-29 11:12:58.807761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4674f00 00:23:55.663 fio: pid=98632, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=16191488, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:23:55.663 [2024-10-29 11:12:58.811092] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x46750e0 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.811103] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x46752c0 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.811101] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4674780 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.811139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x46750e0 00:23:55.663 [2024-10-29 11:12:58.811117] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4674960 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.811159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x46752c0 00:23:55.663 [2024-10-29 11:12:58.811360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4674780 00:23:55.663 [2024-10-29 11:12:58.811420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4674960 00:23:55.663 [2024-10-29 11:12:58.811494] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4674d20 via correct icresp 00:23:55.663 [2024-10-29 11:12:58.811520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4674d20 00:23:55.663 [2024-10-29 11:12:58.811522] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x4675680 via correct icresp 00:23:55.663 [2024-10-29 
11:12:58.811562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x4675680 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=53342208, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=39178240, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=23973888, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=38977536, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=12337152, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=4063232, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=63217664, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=3981312, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=36765696, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=58126336, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=41697280, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=16314368, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=13213696, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=12914688, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=30134272, buflen=4096 00:23:55.663 fio: io_u error on file Nvme0n1: Input/output error: read offset=38567936, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:23:55.663 fio: io_u error on file Nvme2n1: Input/output error: read offset=40538112, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: 
Input/output error: read offset=57282560, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:23:55.664 fio: pid=98616, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.664 fio: pid=98631, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.664 fio: pid=98613, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.664 fio: pid=98628, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=2039808, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=37330944, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=58757120, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=49201152, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=32235520, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=66932736, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=6856704, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=49119232, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=14749696, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=32546816, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=36941824, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=29216768, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=45797376, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=45506560, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=6017024, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=22528000, buflen=4096 00:23:55.664 [2024-10-29 11:12:58.811107] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x46754a0 via correct icresp 00:23:55.664 [2024-10-29 11:12:58.812005] 
nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x46754a0 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=26419200, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=56233984, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=7028736, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=57393152, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=66609152, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=49180672, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=20025344, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=44130304, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=23453696, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=34766848, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=36364288, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=32235520, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=32780288, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=54685696, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=13651968, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=10285056, buflen=4096 00:23:55.664 fio: pid=98615, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=38490112, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=36057088, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=62701568, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=29208576, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=61161472, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=13312000, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=55738368, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=11722752, buflen=4096 00:23:55.664 fio: pid=98630, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=51109888, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=53731328, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=39395328, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=3325952, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=3260416, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=49025024, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=11665408, buflen=4096 00:23:55.664 fio: io_u error on file Nvme2n1: Input/output error: read offset=10444800, buflen=4096 00:23:55.664 fio: pid=98611, err=5/file:io_u.c:1889, func=io_u error, 
error=Input/output error 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:23:55.664 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:23:55.664 [2024-10-29 11:12:58.812853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2c68960 (9): Bad file descriptor 00:23:55.664 [2024-10-29 11:12:58.813934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2c68000 (9): Bad file descriptor 00:23:55.664 00:23:55.664 filename0: (groupid=0, jobs=1): err= 0: pid=98609: Tue Oct 29 11:12:58 2024 00:23:55.664 read: IOPS=1235, BW=4943KiB/s (5062kB/s)(48.3MiB/10010msec) 00:23:55.664 slat (usec): min=5, max=8019, avg=17.73, stdev=238.61 00:23:55.664 clat (usec): min=691, max=35859, avg=12799.29, stdev=4747.39 00:23:55.664 lat (usec): min=706, max=35868, avg=12817.02, stdev=4753.80 00:23:55.664 clat percentiles (usec): 00:23:55.664 | 1.00th=[ 1844], 5.00th=[ 3261], 10.00th=[ 9110], 20.00th=[10945], 00:23:55.664 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:23:55.664 | 70.00th=[12518], 80.00th=[13698], 90.00th=[21365], 95.00th=[23725], 00:23:55.664 | 99.00th=[23987], 99.50th=[23987], 99.90th=[24773], 99.95th=[35390], 00:23:55.664 | 99.99th=[35914] 00:23:55.664 bw ( KiB/s): min= 4192, max= 6864, per=22.05%, avg=4941.60, stdev=561.96, samples=20 00:23:55.664 iops : min= 1048, max= 1716, avg=1235.40, stdev=140.49, samples=20 00:23:55.664 lat (usec) : 750=0.02%, 1000=0.10% 00:23:55.664 lat (msec) : 2=2.14%, 4=4.04%, 10=6.70%, 20=73.17%, 50=13.83% 00:23:55.665 cpu : usr=30.57%, sys=2.60%, ctx=856, majf=0, minf=9 00:23:55.665 IO depths : 1=2.7%, 2=8.7%, 4=24.5%, 8=54.6%, 16=9.5%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=12370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 
(file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98610: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=16, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98611: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98612: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98613: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98614: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98615: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98616: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename1: (groupid=0, jobs=1): err= 0: pid=98617: Tue Oct 29 11:12:58 2024 00:23:55.665 read: IOPS=1354, BW=5419KiB/s (5549kB/s)(53.0MiB/10010msec) 00:23:55.665 slat (usec): min=4, max=8031, avg=15.42, stdev=131.16 00:23:55.665 clat (usec): min=562, max=35586, avg=11689.38, stdev=5422.11 00:23:55.665 lat (usec): min=571, max=35594, avg=11704.80, stdev=5422.45 00:23:55.665 clat percentiles (usec): 00:23:55.665 | 1.00th=[ 1745], 5.00th=[ 2311], 10.00th=[ 3720], 20.00th=[ 7570], 00:23:55.665 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[11863], 60.00th=[13042], 00:23:55.665 | 70.00th=[14091], 80.00th=[15795], 90.00th=[18482], 95.00th=[22152], 00:23:55.665 | 99.00th=[23987], 99.50th=[25035], 99.90th=[26870], 99.95th=[31589], 00:23:55.665 | 99.99th=[35390] 00:23:55.665 bw ( KiB/s): min= 4048, max=14730, per=24.15%, avg=5412.10, stdev=2246.80, samples=20 00:23:55.665 iops : min= 1012, max= 3682, avg=1353.00, stdev=561.59, samples=20 00:23:55.665 lat (usec) : 750=0.05%, 1000=0.14% 00:23:55.665 lat (msec) : 2=2.99%, 4=7.17%, 10=28.41%, 20=53.17%, 50=8.07% 00:23:55.665 cpu : usr=44.64%, sys=4.05%, ctx=1521, majf=0, minf=9 00:23:55.665 IO depths : 1=2.1%, 2=7.8%, 4=23.6%, 8=56.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=13561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98618: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98619: Tue Oct 29 11:12:58 2024 00:23:55.665 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.665 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename1: (groupid=0, jobs=1): err= 0: 
pid=98620: Tue Oct 29 11:12:58 2024 00:23:55.665 read: IOPS=1287, BW=5149KiB/s (5273kB/s)(50.3MiB/10012msec) 00:23:55.665 slat (usec): min=5, max=8018, avg=15.68, stdev=150.95 00:23:55.665 clat (usec): min=425, max=39744, avg=12314.49, stdev=5247.35 00:23:55.665 lat (usec): min=433, max=39753, avg=12330.16, stdev=5247.23 00:23:55.665 clat percentiles (usec): 00:23:55.665 | 1.00th=[ 1893], 5.00th=[ 2999], 10.00th=[ 6128], 20.00th=[ 8094], 00:23:55.665 | 30.00th=[ 9503], 40.00th=[10945], 50.00th=[11994], 60.00th=[12911], 00:23:55.665 | 70.00th=[14615], 80.00th=[15795], 90.00th=[20055], 95.00th=[23725], 00:23:55.665 | 99.00th=[23987], 99.50th=[25822], 99.90th=[31851], 99.95th=[35390], 00:23:55.665 | 99.99th=[39584] 00:23:55.665 bw ( KiB/s): min= 3904, max=11072, per=22.97%, avg=5149.20, stdev=1437.45, samples=20 00:23:55.665 iops : min= 976, max= 2768, avg=1287.30, stdev=359.36, samples=20 00:23:55.665 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.09% 00:23:55.665 lat (msec) : 2=1.31%, 4=5.11%, 10=26.62%, 20=56.67%, 50=10.12% 00:23:55.665 cpu : usr=39.31%, sys=3.43%, ctx=1176, majf=0, minf=9 00:23:55.665 IO depths : 1=2.6%, 2=8.6%, 4=24.3%, 8=54.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:23:55.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.665 issued rwts: total=12889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.665 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.665 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98621: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98622: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98623: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename1: (groupid=0, jobs=1): err= 0: pid=98624: Tue Oct 29 11:12:58 2024 00:23:55.666 read: IOPS=1244, BW=4977KiB/s (5096kB/s)(48.6MiB/10008msec) 00:23:55.666 slat (usec): min=5, 
max=10023, avg=17.64, stdev=202.50 00:23:55.666 clat (usec): min=435, max=34595, avg=12730.79, stdev=4553.72 00:23:55.666 lat (usec): min=443, max=34604, avg=12748.43, stdev=4554.17 00:23:55.666 clat percentiles (usec): 00:23:55.666 | 1.00th=[ 2024], 5.00th=[ 6259], 10.00th=[ 7963], 20.00th=[ 9241], 00:23:55.666 | 30.00th=[10159], 40.00th=[11731], 50.00th=[12125], 60.00th=[13566], 00:23:55.666 | 70.00th=[14615], 80.00th=[15795], 90.00th=[17957], 95.00th=[22938], 00:23:55.666 | 99.00th=[23987], 99.50th=[24249], 99.90th=[27657], 99.95th=[31065], 00:23:55.666 | 99.99th=[34341] 00:23:55.666 bw ( KiB/s): min= 4208, max= 7072, per=22.19%, avg=4974.40, stdev=614.12, samples=20 00:23:55.666 iops : min= 1052, max= 1768, avg=1243.60, stdev=153.53, samples=20 00:23:55.666 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:23:55.666 lat (msec) : 2=0.86%, 4=2.43%, 10=25.76%, 20=62.76%, 50=8.13% 00:23:55.666 cpu : usr=41.02%, sys=3.37%, ctx=1350, majf=0, minf=9 00:23:55.666 IO depths : 1=2.6%, 2=8.4%, 4=23.9%, 8=55.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=12452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98625: Tue Oct 29 11:12:58 2024 00:23:55.666 read: IOPS=2101, BW=8378KiB/s (8580kB/s)(18.9MiB/2304msec) 00:23:55.666 slat (usec): min=4, max=8019, avg=51.22, stdev=576.85 00:23:55.666 clat (usec): min=696, max=35756, avg=7146.34, stdev=4973.54 00:23:55.666 lat (usec): min=705, max=35772, avg=7197.70, stdev=5043.11 00:23:55.666 clat percentiles (usec): 00:23:55.666 | 1.00th=[ 1680], 5.00th=[ 1713], 10.00th=[ 1745], 20.00th=[ 1778], 00:23:55.666 | 30.00th=[ 2278], 40.00th=[ 3490], 50.00th=[ 7963], 60.00th=[ 9503], 00:23:55.666 | 70.00th=[ 9896], 80.00th=[11863], 90.00th=[11994], 95.00th=[12256], 00:23:55.666 | 99.00th=[20317], 99.50th=[21627], 99.90th=[35914], 99.95th=[35914], 00:23:55.666 | 99.99th=[35914] 00:23:55.666 bw ( KiB/s): min= 7152, max= 9184, per=36.36%, avg=8148.00, stdev=953.88, samples=4 00:23:55.666 iops : min= 1788, max= 2296, avg=2037.00, stdev=238.47, samples=4 00:23:55.666 lat (usec) : 750=0.08%, 1000=0.04% 00:23:55.666 lat (msec) : 2=28.05%, 4=12.04%, 10=30.17%, 20=26.62%, 50=2.66% 00:23:55.666 cpu : usr=31.74%, sys=3.52%, ctx=216, majf=0, minf=9 00:23:55.666 IO depths : 1=3.1%, 2=9.4%, 4=24.9%, 8=53.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.1%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=4842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98626: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:23:55.666 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98627: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98628: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.666 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.666 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.666 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98629: Tue Oct 29 11:12:58 2024 00:23:55.666 cpu : usr=0.00%, sys=0.00%, ctx=16, majf=0, minf=0 00:23:55.666 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.667 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98630: Tue Oct 29 11:12:58 2024 00:23:55.667 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.667 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98631: Tue Oct 29 11:12:58 2024 00:23:55.667 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.667 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=98632: Tue Oct 29 11:12:58 2024 00:23:55.667 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:23:55.667 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:55.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 complete : 0=0.0%, 4=88.9%, 
8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:55.667 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:55.667 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:55.667 00:23:55.667 Run status group 0 (all jobs): 00:23:55.667 READ: bw=21.9MiB/s (22.9MB/s), 4943KiB/s-8378KiB/s (5062kB/s-8580kB/s), io=219MiB (230MB), run=2304-10012msec 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # trap - ERR 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # print_backtrace 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1155 -- # [[ ehxBET =~ e ]] 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1157 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1157 -- # local args 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1159 -- # xtrace_disable 00:23:55.667 11:12:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:55.667 ========== Backtrace start: ========== 00:23:55.667 00:23:55.667 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1354 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:23:55.667 ... 00:23:55.667 1349 break 00:23:55.667 1350 fi 00:23:55.667 1351 done 00:23:55.667 1352 00:23:55.667 1353 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:23:55.667 1354 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:23:55.667 1355 } 00:23:55.667 1356 00:23:55.667 1357 function fio_bdev() { 00:23:55.667 1358 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:23:55.667 1359 } 00:23:55.667 ... 00:23:55.667 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1358 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:23:55.667 ... 00:23:55.667 1353 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:23:55.667 1354 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:23:55.667 1355 } 00:23:55.667 1356 00:23:55.667 1357 function fio_bdev() { 00:23:55.667 1358 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:23:55.667 1359 } 00:23:55.667 1360 00:23:55.667 1361 function fio_nvme() { 00:23:55.667 1362 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:23:55.667 1363 } 00:23:55.667 ... 00:23:55.667 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:23:55.667 ... 00:23:55.667 77 FIO 00:23:55.667 78 done 00:23:55.667 79 } 00:23:55.667 80 00:23:55.667 81 fio() { 00:23:55.667 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:23:55.667 83 } 00:23:55.667 84 00:23:55.667 85 fio_dif_1() { 00:23:55.667 86 create_subsystems 0 00:23:55.667 87 fio <(create_json_sub_conf 0) 00:23:55.667 ... 00:23:55.667 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([]) 00:23:55.667 ... 
00:23:55.667 107 destroy_subsystems 0 00:23:55.667 108 00:23:55.667 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2 00:23:55.667 110 00:23:55.667 111 create_subsystems 0 1 2 00:23:55.667 => 112 fio <(create_json_sub_conf 0 1 2) 00:23:55.667 113 destroy_subsystems 0 1 2 00:23:55.667 114 00:23:55.667 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1 00:23:55.667 116 00:23:55.667 117 create_subsystems 0 1 00:23:55.667 ... 00:23:55.667 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1127 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"]) 00:23:55.667 ... 00:23:55.667 1122 timing_enter $test_name 00:23:55.667 1123 echo "************************************" 00:23:55.667 1124 echo "START TEST $test_name" 00:23:55.667 1125 echo "************************************" 00:23:55.667 1126 xtrace_restore 00:23:55.667 1127 time "$@" 00:23:55.667 1128 xtrace_disable 00:23:55.667 1129 echo "************************************" 00:23:55.667 1130 echo "END TEST $test_name" 00:23:55.667 1131 echo "************************************" 00:23:55.667 1132 timing_exit $test_name 00:23:55.667 ... 00:23:55.667 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"]) 00:23:55.667 ... 00:23:55.667 138 00:23:55.667 139 create_transport 00:23:55.667 140 00:23:55.667 141 run_test "fio_dif_1_default" fio_dif_1 00:23:55.667 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems 00:23:55.667 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params 00:23:55.667 144 run_test "fio_dif_digest" fio_dif_digest 00:23:55.667 145 00:23:55.667 146 trap - SIGINT SIGTERM EXIT 00:23:55.667 147 nvmftestfini 00:23:55.667 ... 00:23:55.667 00:23:55.667 ========== Backtrace end ========== 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1196 -- # return 0 00:23:55.667 00:23:55.667 real 0m19.187s 00:23:55.667 user 2m7.318s 00:23:55.667 sys 0m3.247s 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # process_shm --id 0 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@810 -- # type=--id 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@811 -- # id=0 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@822 -- # for n in $shm_files 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:55.667 nvmf_trace.0 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@825 -- # return 0 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # nvmftestfini 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@121 -- # sync 00:23:55.667 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.667 11:12:59 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@124 -- # set +e 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.668 rmmod nvme_tcp 00:23:55.668 rmmod nvme_fabrics 00:23:55.668 rmmod nvme_keyring 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@128 -- # set -e 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@129 -- # return 0 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@517 -- # '[' -n 98139 ']' 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@518 -- # killprocess 98139 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@952 -- # '[' -z 98139 ']' 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@956 -- # kill -0 98139 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@957 -- # uname 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98139 00:23:55.668 killing process with pid 98139 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98139' 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@971 -- # kill 98139 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@976 -- # wait 98139 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:55.668 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:55.668 Waiting for block devices as requested 00:23:55.668 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:55.668 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@297 -- # iptr 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # iptables-save 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:55.668 11:12:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.668 11:13:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@300 -- # return 0 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1127 -- # trap - ERR 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1127 -- # print_backtrace 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1155 -- # [[ ehxBET =~ e ]] 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1157 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf') 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1157 -- # local args 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1159 -- # xtrace_disable 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:55.668 ========== Backtrace start: ========== 00:23:55.668 00:23:55.668 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1127 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"]) 00:23:55.668 ... 00:23:55.668 1122 timing_enter $test_name 00:23:55.668 1123 echo "************************************" 00:23:55.668 1124 echo "START TEST $test_name" 00:23:55.668 1125 echo "************************************" 00:23:55.668 1126 xtrace_restore 00:23:55.668 1127 time "$@" 00:23:55.668 1128 xtrace_disable 00:23:55.668 1129 echo "************************************" 00:23:55.668 1130 echo "END TEST $test_name" 00:23:55.668 1131 echo "************************************" 00:23:55.668 1132 timing_exit $test_name 00:23:55.668 ... 00:23:55.668 in /home/vagrant/spdk_repo/spdk/autotest.sh:285 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:23:55.668 ... 
00:23:55.668 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:23:55.668 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:23:55.668 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:23:55.668 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:23:55.668 284 fi 00:23:55.668 => 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:23:55.668 286 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh 00:23:55.668 287 # The keyring tests utilize NVMe/TLS 00:23:55.668 288 run_test "keyring_file" "$rootdir/test/keyring/file.sh" 00:23:55.668 289 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then 00:23:55.668 290 run_test "keyring_linux" "$rootdir/scripts/keyctl-session-wrapper" \ 00:23:55.668 ... 00:23:55.668 00:23:55.668 ========== Backtrace end ========== 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1196 -- # return 0 00:23:55.668 00:23:55.668 real 0m43.787s 00:23:55.668 user 3m7.899s 00:23:55.668 sys 0m10.772s 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1 -- # autotest_cleanup 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1394 -- # local autotest_es=20 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@1395 -- # xtrace_disable 00:23:55.668 11:13:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:07.878 INFO: APP EXITING 00:24:07.878 INFO: killing all VMs 00:24:07.878 INFO: killing vhost app 00:24:07.878 INFO: EXIT DONE 00:24:07.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:07.878 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:07.878 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:08.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:08.136 Cleaning 00:24:08.136 Removing: /var/run/dpdk/spdk0/config 00:24:08.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:08.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:08.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:08.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:08.394 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:08.394 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:08.394 Removing: /var/run/dpdk/spdk1/config 00:24:08.394 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:08.394 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:08.394 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:08.394 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:08.394 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:08.394 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:08.394 Removing: /var/run/dpdk/spdk2/config 00:24:08.394 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:08.394 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:08.394 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:08.394 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:08.394 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:08.394 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:08.394 Removing: /var/run/dpdk/spdk3/config 00:24:08.394 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:08.394 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:08.394 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:08.394 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:08.394 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:08.394 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:08.394 Removing: /var/run/dpdk/spdk4/config 00:24:08.394 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:08.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:08.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:08.395 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:08.395 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:08.395 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:08.395 Removing: /dev/shm/nvmf_trace.0 00:24:08.395 Removing: /dev/shm/spdk_tgt_trace.pid69959 00:24:08.395 Removing: /var/run/dpdk/spdk0 00:24:08.395 Removing: /var/run/dpdk/spdk1 00:24:08.395 Removing: /var/run/dpdk/spdk2 00:24:08.395 Removing: /var/run/dpdk/spdk3 00:24:08.395 Removing: /var/run/dpdk/spdk4 00:24:08.395 Removing: /var/run/dpdk/spdk_pid69806 00:24:08.395 Removing: /var/run/dpdk/spdk_pid69959 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70152 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70233 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70253 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70357 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70368 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70502 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70703 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70851 00:24:08.395 Removing: /var/run/dpdk/spdk_pid70928 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71000 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71094 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71166 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71199 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71240 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71304 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71396 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71842 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71888 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71932 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71940 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71994 00:24:08.395 Removing: /var/run/dpdk/spdk_pid71997 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72064 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72067 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72113 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72123 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72163 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72174 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72297 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72327 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72409 00:24:08.395 Removing: /var/run/dpdk/spdk_pid72736 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72748 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72783 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72798 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72808 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72827 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72840 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72856 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72875 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72888 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72904 00:24:08.654 Removing: /var/run/dpdk/spdk_pid72923 00:24:08.655 Removing: /var/run/dpdk/spdk_pid72931 00:24:08.655 Removing: /var/run/dpdk/spdk_pid72952 00:24:08.655 Removing: /var/run/dpdk/spdk_pid72966 00:24:08.655 Removing: /var/run/dpdk/spdk_pid72979 00:24:08.655 Removing: /var/run/dpdk/spdk_pid72994 00:24:08.655 
Removing: /var/run/dpdk/spdk_pid73009 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73027 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73037 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73073 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73081 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73116 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73177 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73211 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73215 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73249 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73253 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73255 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73303 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73311 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73345 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73349 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73353 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73368 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73372 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73376 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73391 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73395 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73429 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73450 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73454 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73488 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73492 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73505 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73540 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73546 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73578 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73580 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73593 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73595 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73597 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73610 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73612 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73620 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73696 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73738 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73845 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73886 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73924 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73944 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73960 00:24:08.655 Removing: /var/run/dpdk/spdk_pid73975 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74006 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74022 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74100 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74110 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74149 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74211 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74261 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74285 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74385 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74433 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74460 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74692 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74784 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74807 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74835 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74870 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74898 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74937 00:24:08.655 Removing: /var/run/dpdk/spdk_pid74963 00:24:08.655 Removing: /var/run/dpdk/spdk_pid75351 00:24:08.655 Removing: /var/run/dpdk/spdk_pid75395 00:24:08.655 Removing: /var/run/dpdk/spdk_pid75728 00:24:08.914 Removing: /var/run/dpdk/spdk_pid76185 00:24:08.914 Removing: 
/var/run/dpdk/spdk_pid76454 00:24:08.914 Removing: /var/run/dpdk/spdk_pid77310 00:24:08.914 Removing: /var/run/dpdk/spdk_pid78209 00:24:08.914 Removing: /var/run/dpdk/spdk_pid78326 00:24:08.914 Removing: /var/run/dpdk/spdk_pid78388 00:24:08.914 Removing: /var/run/dpdk/spdk_pid79788 00:24:08.914 Removing: /var/run/dpdk/spdk_pid80103 00:24:08.914 Removing: /var/run/dpdk/spdk_pid83832 00:24:08.914 Removing: /var/run/dpdk/spdk_pid84192 00:24:08.914 Removing: /var/run/dpdk/spdk_pid84301 00:24:08.914 Removing: /var/run/dpdk/spdk_pid84428 00:24:08.914 Removing: /var/run/dpdk/spdk_pid84449 00:24:08.914 Removing: /var/run/dpdk/spdk_pid84476 00:24:08.915 Removing: /var/run/dpdk/spdk_pid84497 00:24:08.915 Removing: /var/run/dpdk/spdk_pid84589 00:24:08.915 Removing: /var/run/dpdk/spdk_pid84715 00:24:08.915 Removing: /var/run/dpdk/spdk_pid84857 00:24:08.915 Removing: /var/run/dpdk/spdk_pid84931 00:24:08.915 Removing: /var/run/dpdk/spdk_pid85118 00:24:08.915 Removing: /var/run/dpdk/spdk_pid85186 00:24:08.915 Removing: /var/run/dpdk/spdk_pid85266 00:24:08.915 Removing: /var/run/dpdk/spdk_pid85607 00:24:08.915 Removing: /var/run/dpdk/spdk_pid86011 00:24:08.915 Removing: /var/run/dpdk/spdk_pid86012 00:24:08.915 Removing: /var/run/dpdk/spdk_pid86013 00:24:08.915 Removing: /var/run/dpdk/spdk_pid86268 00:24:08.915 Removing: /var/run/dpdk/spdk_pid86515 00:24:08.915 Removing: /var/run/dpdk/spdk_pid86517 00:24:08.915 Removing: /var/run/dpdk/spdk_pid88888 00:24:08.915 Removing: /var/run/dpdk/spdk_pid88894 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89222 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89236 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89256 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89286 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89296 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89380 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89386 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89490 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89496 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89600 00:24:08.915 Removing: /var/run/dpdk/spdk_pid89612 00:24:08.915 Removing: /var/run/dpdk/spdk_pid90054 00:24:08.915 Removing: /var/run/dpdk/spdk_pid90099 00:24:08.915 Removing: /var/run/dpdk/spdk_pid90213 00:24:08.915 Removing: /var/run/dpdk/spdk_pid90291 00:24:08.915 Removing: /var/run/dpdk/spdk_pid90642 00:24:08.915 Removing: /var/run/dpdk/spdk_pid90831 00:24:08.915 Removing: /var/run/dpdk/spdk_pid91244 00:24:08.915 Removing: /var/run/dpdk/spdk_pid91791 00:24:08.915 Removing: /var/run/dpdk/spdk_pid92644 00:24:08.915 Removing: /var/run/dpdk/spdk_pid93280 00:24:08.915 Removing: /var/run/dpdk/spdk_pid93282 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95283 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95336 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95379 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95432 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95533 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95580 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95629 00:24:08.915 Removing: /var/run/dpdk/spdk_pid95679 00:24:08.915 Removing: /var/run/dpdk/spdk_pid96029 00:24:08.915 Removing: /var/run/dpdk/spdk_pid97232 00:24:08.915 Removing: /var/run/dpdk/spdk_pid97366 00:24:08.915 Removing: /var/run/dpdk/spdk_pid97602 00:24:08.915 Removing: /var/run/dpdk/spdk_pid98183 00:24:08.915 Removing: /var/run/dpdk/spdk_pid98343 00:24:08.915 Removing: /var/run/dpdk/spdk_pid98503 00:24:08.915 Removing: /var/run/dpdk/spdk_pid98594 00:24:08.915 Clean 00:24:09.482 11:13:14 nvmf_dif -- common/autotest_common.sh@1451 -- # return 20 
00:24:09.482 11:13:14 nvmf_dif -- common/autotest_common.sh@1 -- # : 00:24:09.482 11:13:14 nvmf_dif -- common/autotest_common.sh@1 -- # exit 1 00:24:09.482 11:13:14 -- spdk/autorun.sh@27 -- $ trap - ERR 00:24:09.482 11:13:14 -- spdk/autorun.sh@27 -- $ print_backtrace 00:24:09.482 11:13:14 -- common/autotest_common.sh@1155 -- $ [[ ehxBET =~ e ]] 00:24:09.482 11:13:14 -- common/autotest_common.sh@1157 -- $ args=('/home/vagrant/spdk_repo/autorun-spdk.conf') 00:24:09.482 11:13:14 -- common/autotest_common.sh@1157 -- $ local args 00:24:09.482 11:13:14 -- common/autotest_common.sh@1159 -- $ xtrace_disable 00:24:09.482 11:13:14 -- common/autotest_common.sh@10 -- $ set +x 00:24:09.482 ========== Backtrace start: ========== 00:24:09.482 00:24:09.482 in spdk/autorun.sh:27 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:24:09.482 ... 00:24:09.482 22 trap 'timing_finish || exit 1' EXIT 00:24:09.482 23 00:24:09.482 24 # Runs agent scripts 00:24:09.482 25 $rootdir/autobuild.sh "$conf" 00:24:09.482 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then 00:24:09.482 => 27 sudo -E $rootdir/autotest.sh "$conf" 00:24:09.482 28 fi 00:24:09.483 ... 00:24:09.483 00:24:09.483 ========== Backtrace end ========== 00:24:09.483 11:13:14 -- common/autotest_common.sh@1196 -- $ return 0 00:24:09.483 11:13:14 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:09.483 11:13:14 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:09.483 11:13:14 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:09.483 11:13:14 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:09.483 11:13:14 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:09.753 [Pipeline] } 00:24:09.772 [Pipeline] // timeout 00:24:09.779 [Pipeline] } 00:24:09.796 [Pipeline] // stage 00:24:09.803 [Pipeline] } 00:24:09.807 ERROR: script returned exit code 1 00:24:09.807 Setting overall build result to FAILURE 00:24:09.821 [Pipeline] // catchError 00:24:09.830 [Pipeline] stage 00:24:09.833 [Pipeline] { (Stop VM) 00:24:09.845 [Pipeline] sh 00:24:10.126 + vagrant halt 00:24:12.659 ==> default: Halting domain... 00:24:19.245 [Pipeline] sh 00:24:19.526 + vagrant destroy -f 00:24:22.062 ==> default: Removing domain... 00:24:22.334 [Pipeline] sh 00:24:22.615 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:24:22.624 [Pipeline] } 00:24:22.640 [Pipeline] // stage 00:24:22.645 [Pipeline] } 00:24:22.659 [Pipeline] // dir 00:24:22.665 [Pipeline] } 00:24:22.680 [Pipeline] // wrap 00:24:22.686 [Pipeline] } 00:24:22.699 [Pipeline] // catchError 00:24:22.709 [Pipeline] stage 00:24:22.711 [Pipeline] { (Epilogue) 00:24:22.724 [Pipeline] sh 00:24:23.006 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:24.921 [Pipeline] catchError 00:24:24.924 [Pipeline] { 00:24:24.936 [Pipeline] sh 00:24:25.214 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:25.473 Artifacts sizes are good 00:24:25.483 [Pipeline] } 00:24:25.500 [Pipeline] // catchError 00:24:25.515 [Pipeline] archiveArtifacts 00:24:25.525 Archiving artifacts 00:24:25.735 [Pipeline] cleanWs 00:24:25.747 [WS-CLEANUP] Deleting project workspace... 00:24:25.747 [WS-CLEANUP] Deferred wipeout is used... 
00:24:25.754 [WS-CLEANUP] done 00:24:25.756 [Pipeline] } 00:24:25.771 [Pipeline] // stage 00:24:25.777 [Pipeline] } 00:24:25.790 [Pipeline] // node 00:24:25.796 [Pipeline] End of Pipeline 00:24:25.838 Finished: FAILURE